Re: Accepted valknut 0.3.7-1 (i386 source)

2005-03-19 Thread Pasi Savilaakso
You wrote in your message (sent Saturday, 19 March 2005, 02:53):
> Hi Pasi,
>
> On Friday, 18 Mar 2005, you wrote:
> > Changes:
> >  valknut (0.3.7-1) unstable; urgency=high
> >  .
> >* New upstream release (Closes: #289643, #269952, #265284, #270096,
> > #286234)
>
> is there any reason for not giving some more explanation when closing
> bugs with urgency=high, while listing "New upstream release" as the
> only changelog entry?
>
> I would like to have some more explanation for this in the changelog.
>
Hello Martin,
There really isn't any more to say. Nothing else changed in the package
besides the new source, so I don't really know what else I could say. The
urgency is high because dcgui-qt is totally unusable with the new libxml,
AND if one tries to start dcgui-qt with the new libxml it destroys one's
dcgui-qt config file. But again, nothing changed in the package; just
recompiling against the new libxml removes those unusability issues. Oh,
one thing I should have said was that I updated the man page to match the
new name, but I forgot it because I made that change while working with 0.3.3.

Regards, Pasi Savilaakso


pgpyZ4etExvbB.pgp
Description: PGP signature


Re: Debian DPL Debate Comments

2005-03-19 Thread Adrian von Bidder
[cc to you - I don't know if you read the list]

On Friday 18 March 2005 17.22, Ritesh Raj Sarraf wrote:

> As an example, it's been around 7 years now that I have been using Linux,
> and I do have a fair amount of knowledge. It would be great if DDs here
> could harness the skills of "wannabe contributors" like us and prepare us
> to help this marvelous community. As for me in particular, I'm willing to
> contribute to Debian as maybe a DD, Package Maintainer, SysAdmin or any
> other work where you people find an undiscovered skill in me.

It's just Not The Way It Works(tm).

Nobody will assign you some work 'just so' - if you want to contribute, 
think about what *you* think could be better in Debian. How would you 
do it? Then find the people (or the right mailing list) who do some work in 
that area (if you can't find the right place, a question to this list is 
o.k.) and work with them.

That said, if you just have some spare time and want to do something for 
Debian, fire up your web browser and browse Debian's bug database on 
.  Bug-fixing help is always welcome. If you're not 
sure what to do with a particular bug, you can always ask on the 
#debian-bugs or #debian-devel IRC channel on irc.debian.org.

greetings
-- vbi

-- 
Beware of the FUD - know your enemies. This week
* Patent Law, and how it is currently abused. *
http://fortytwo.ch/opinion


pgp7syn8XF1ss.pgp
Description: PGP signature


Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Andrew M.A. Cater
On Fri, Mar 18, 2005 at 03:23:18AM -0800, Michael K. Edwards wrote:
>  Just because a full Debian doesn't usually
> fit today's embedded footprint doesn't mean it won't fit tomorrow's,
> and in the meantime Debian's toolchain, kernel, and initrd-tools are
> probably the best embedded Linux development and packaging environment
> going.  
> 
> I think Sarge on ARM has the potential to greatly reduce the learning
> curve for some kinds of embedded development, especially if Iyonix
> succeeds in its niche (long live the Acorn!).  In particular I look
> forward to being able to woo certain mobile computing colleagues,
> currently doomed to PocketPC, with a proper native development
> environment.  The same goes for some apparent "doorstop" arches:
> mipsel in networking and storage (e. g., SoC from Broadcom in
> set-tops, wireless gateways, and micro-NAS) and m68k in device control
> (68332 peripheral support, anyone?).
> 
This is _exactly_ so: I've just had two colleagues get work to buy them
Xscale embedded processor boards - they'd previously bought some slower
boards for themselves. The boards come with a cut-down version of Debian
stable _BECAUSE OF THE TOOLCHAIN AND CROSS COMPILATION_ and because it all
just works :) One of them now wants testing ISOs.
> 
> Likewise, minority-architecture autobuilders are one reason why Debian
> is really the only organization I trust to QA a toolchain any more. 
> For instance, compiling KDE for all of them expands the C++ stress
> test in a really useful way.  Even better if at least a couple of
> people actually run big globs of GUI on their kotatsu and catch
> run-time problems like #292673 (grave glibc bug, spotted with
> evolution on ia64).
> 

I don't have an Alpha to run as a desktop any more or a Sparc32 - they
were loan machines from my workplace - when I did, it was insanely easy 
to install identical software across the three architectures and have 
the same environment, features and ease of administration.  
If you have to administer many machines, that familiarity saves man years.

> Although sarge's long cycle has been frustrating for many people, if
> you ask me it's just as well that Debian never put the label "stable"
> on kernel 2.6.<7 (i. e., pre-objrmap), gcc 3.<2.3+ (not just C++, but
> nagging C and optimizer problems, often exposed by non-i386 kernels,
> in all previous 3.x), or glibc 2.3.(before next week or so, given
> #292673).  
> 
Red Hat and Novell are putting big money into releasing Enterprise
versions _less often_ and then supporting them for five or seven years
in order to assure stability. That's the model that Debian has 
(inadvertently) had for years. Their targeted release cycle is 18
months - 2 years. IMHO the quantity of software in these enterprise
distros is minimal, the testing seems poor, and the overall quality variable.
When you need something like a library you're used to in Debian, it's 
normally not there and you have to dig round the net for  
and trust to luck. The RHEL point releases are not great - and may change 
underlying stuff without telling you. [There is a prerelease of gcc 4 in 
RHEL 4 - a snapshot version from 12122004 - I fully expect them to introduce 
full gcc4 in one of their point releases _without bumping the version 
number_ :( ]

> None of that says that the world has a right to put the burden of
> sysadmining the broadest single software QA effort in history on the
> Debian release team's shoulders.  But if specific technical problems
> can be identified and addressed to where the infrastructure equipment
> and teams can stand it, keeping Debian Universal for at least one more
> cycle would be Herculean but not impossible.  I think this is one of
> those cases where the last 20% of the effort invested (coaxing along
> minority architectures) provides 80% of the value (stable actually
> means something).
> 
> Or look at it this way:  supporting minority architectures has
> revealed all sorts of scalability problems in Debian.  Some of those
> problems will be really nasty if we wait until the major architectures
> are in crisis to face them.  The doorstops are the canaries in the
> coal mine that start to suffocate before the big guys notice air
> quality problems.  Don't like performing CPR on canaries?  Don't put
> 'em down in coal mines!  Wait, there's something wrong with that logic
> ...

Full ACK to the above. I can see where the Vancouver proposals are coming 
from, but the cut of what counts as a valid top tier isn't quite right yet. 
I'm not sure that ia64, for example, is worth the candle at all - but we 
are virtually the only distribution supporting it well. We _are_ the only 
distribution supporting hppa - now if we could just get Debian running on 
Superdome, I could suggest a replacement for hpux :) Ditto sole major
distribution for mips and sundry others. 

I run testing _at work_ on several machines because I need the slightly 
more up to date software and relative stability but that's only pen

Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Steve Langasek
Hi Greg,

On Tue, Mar 15, 2005 at 02:10:47PM -0500, Greg Folkert wrote:
> > > BTW, I am not sure this is really a good way to measure the use of an 
> > > architecture, mainly because users could use a local mirror if they have 
> > > a lot of machines of the same architecture. How about using popcon *in 
> > > addition* to that?

> > This isn't being used to measure the use of the architecture; it's being
> > used to measure the *download frequency* for the architecture, which is
> > precisely the criterion that should be used in deciding how to structure
> > the mirror network.

> Okay, I have to comment here, seeing that I personally have, at two
> separate locations, two complete mirrors that I use nearly every day.
> They only update when a change in the archive is detected. That means
> *MY* $PRETTY_BIG_NUMBER of usages of my own mirrors in each locale will
> mean nothing. I run my own mirror(s) so as to reduce the load on the
> Debian network. I actually scaled back what I use, now only having 5
> arches I support: SPARC (and UltraSPARC), Alpha, HPPA-RISC, PowerPC and
> x86 (Intel and otherwise). I dropped IA64 a while ago and will pick up
> X86_AMD64 when it becomes part of Sid proper.

> How would you address the fact that the bulk of my usage is not even
> seen by your network?

Hrm, in what sense is this something that needs to be "addressed" at all?
If you use an internal mirror for your heavy internal usage, then surely
you, as a user, don't need a diverse network of full public mirrors -- you
just need one, solid mirror to download from, don't you?

It makes perfect sense to me that if i386(+amd64) represents 10x as many
downloads as all other archs combined, and the other archs combined
represent 10x as much disk space needed as i386(+amd64), it's to our
advantage to split the two so that we can benefit from mirror operators
with either high bandwidth and little disk space (i386) or lots of disk
space but less bandwidth (SCC).
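To make the ratio concrete, the arithmetic can be sketched as follows; all absolute figures below are invented for illustration, and only the two 10x ratios come from the paragraph above:

```shell
# Assumed figures: i386(+amd64) gets 10x the downloads of everything else
# combined, while everything else needs 10x the disk space. The numbers
# themselves are made up.
tier1_dl=1000 tier1_gb=30    # i386 + amd64: downloads/day, archive size in GB
scc_dl=100    scc_gb=300     # all other arches combined

# Downloads served per GB of mirror disk, tier1 relative to SCC:
echo $(( (tier1_dl * scc_gb) / (tier1_gb * scc_dl) ))   # prints 100
```

Under these assumptions a small-disk, high-bandwidth mirror serves about 100x more download demand per GB of disk by carrying only i386(+amd64), which is the point of splitting the archive.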

> > > >architecture requirements:
> > > I would add as for the core set architecture:
> > > - there must be a developer-accessible debian.org machine for the 
> > > architecture.
> > 
> > This gets a little tricky for non-RC architectures, because if it's not
> > already (or currently) a released architecture, we have no stable distro
> > that can be installed on it, which means we have no security support for
> > it; without security support, DSA isn't willing to maintain it, which
> > means they probably aren't going to want to put a "debian.org" name on
> > it, either -- and they certainly won't want to give it privileged access
> > to LDAP.
> > 
> > You could say that "there must be a developer-accessible machine for the
> > architecture" without specifying "debian.org"; but I'm not sure that we
> > should *require* this, either.  Particularly for ports that are waning
> > and are not expected to become RC architectures in the future, I think
> > porters should be free to decide whether to spend the effort on
> > maintaining such a machine since its absence only hurts that port, not
> > the release.

> I am currently in the process of acquiring rotated out of production
> machines for 3 of the 5 architectures I support. I make a run to the
> right-coast of the US once every 2 months and pickup sometimes 10 - 4-16
> processor machines with disk and typically a dozen of GB of memory and
> gaggles of disk. I rebuild/recondition most of these machines and
> distribute them to NPOs that need this kind of horsepower but can't
> afford current stuff or even used stuff from those same suppliers. I put
> Debian on them and this makes a huge investment in the long term health
> of these Orgs.

> If these machines are no longer fully supported by Debian... how can I
> continue to do this.

What does "fully supported" mean to you?  What are the use cases for these
machines?  Which aspects of stable are required by these users?

> How much is the difference between Debian running on "Humidifier in the
> Basement" reputation, and a "We release more often than Ubuntu"
> reputation?

> But, seriously, how much do you think Debian will be hurt with:

> Compare these:
> 1. Debian the "Universal OS"
> 2. Debian the "Almost-Sorta-Kinda-used-to-be Universal OS"

> 3. "Old as fossilized dinosaur poo, and as stable, but runs on
> everything including the humidifier in the basement"
> 4. "Very recent, since it doesn't really support NON-big 4
> processors anyway, so why not run Fedora Core"

> Personally, I like 1 and 3. They are the 2nd and 3rd most important
> technical reasons I chose Debian. The 1st technical reason is Debian's
> maintainability. Please oh please let us not change my mind for me.

I'm assuming your humidifier isn't connected to the public Internet, and
doesn't need ongoing security support...?

We're also constantly hearing from users who are using Debian in settings
where they *would* benefit from security support, but are 

Re: Accepted valknut 0.3.7-1 (i386 source)

2005-03-19 Thread Steve Langasek
On Sat, Mar 19, 2005 at 06:34:26AM +0200, Pasi Savilaakso wrote:
> You wrote in your message (sent Saturday, 19 March 2005, 02:53):
> > Hi Pasi,

> > On Friday, 18 Mar 2005, you wrote:
> > > Changes:
> > >  valknut (0.3.7-1) unstable; urgency=high
> > >  .
> > >* New upstream release (Closes: #289643, #269952, #265284, #270096,
> > > #286234)

> > is there any reason for not giving some more explanation when closing
> > bugs with urgency=high, while listing "New upstream release" as the
> > only changelog entry?

> > I would like to have some more explanation for this in the changelog.

> Hello Martin,
> There really isn't any more to say. Nothing else changed in the package
> besides the new source, so I don't really know what else I could say. The
> urgency is high because dcgui-qt is totally unusable with the new libxml,
> AND if one tries to start dcgui-qt with the new libxml it destroys one's
> dcgui-qt config file. But again, nothing changed in the package; just
> recompiling against the new libxml removes those unusability issues. Oh,
> one thing I should have said was that I updated the man page to match the
> new name, but I forgot it because I made that change while working with 0.3.3.

Bug #289643 was not a request for packaging the new upstream version: it was
a bug report complaining about the program failing to start.  "New upstream
version" has nothing to do with why this bug was closed.

Bug #269952 was not a request for packaging the new upstream version; it was
a report about broken icons.

Bug #265284 was not a request for packaging the new upstream version; it was
a request to change some strings in the interface, which were changed
upstream.  But "New upstream version" is not why this bug was closed.

Bug #270096 and bug #286234 are requests for the new upstream version.  So
it is appropriate to list them as such.

If you're going to use the upload bug-closing convenience feature, use it
right -- your changelog should have something relevant to say about the bug,
which is *not*, in this case, "New upstream version".


-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: my thoughts on the Vancouver Prospectus

2005-03-19 Thread Bill Allombert
On Sat, Mar 19, 2005 at 09:13:07AM +0100, Karsten Merker wrote:
> On Fri, Mar 18, 2005 at 06:44:46PM -0800, Steve Langasek wrote:
> > [cc:ed back to -devel, since these are technical questions being raised and
> > answered]
> 
> > > * Why is the permitted number of buildds for an architecture restricted to
> > >   2 or 3?
> > 
> > - Architectures which need more than 2 buildds to keep up with package
> >   uploads on an ongoing basis are very slow indeed; while slower,
> >   low-powered chips are indeed useful in certain applications, they are
> >   a) unlikely to be able to usefully run much of the software we currently
> >   expect our ports to build, and b) definitely too slow in terms of
> >   single-package build times to avoid inevitably delaying high-priority
> >   package fixes for RC bugs.
> 
> a) is true for some big packages like GNOME and KDE, but that
> does not impede the architecture's usefulness for other software
> we have in the archive.

Also, it is an example of the ridiculously large source packages which
create other problems by themselves, such as the amount of bandwidth wasted
when one has to apply a one-line fix, in particular for security updates.

Why not consider splitting those source packages? IIRC, this is
planned for the X11 source packages. This seems a better option overall.

Cheers,
-- 
Bill. <[EMAIL PROTECTED]>

Imagine a large red swirl here.


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Reinhard Tartler
On 18 Mar 2005 18:58:50 -0800, Thomas Bushnell BSG <[EMAIL PROTECTED]> wrote: 
> > A much faster solution would be to use distcc or scratchbox for
> > crosscompiling.
> 
> Debian packages cannot be reliably built with a cross-compiler,
> because they very frequently need to execute the compiled binaries as
> well as just compile them.

This, plus bugs in cross-compiling toolchains, prohibits doing this. But
what about emulating a buildd with qemu/basilisk2/... and setting up
distcc with a cross-compiling gcc on the host? The
linking/installing/testing/running procedures would then run inside the
emulated hardware, and only the compiling itself (!) would be done
using a cross-compiler.

-- 
regards,
Reinhard





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Marco d'Itri
On Mar 19, Thomas Bushnell BSG <[EMAIL PROTECTED]> wrote:

> > > There would definitely be duplication of arch:all between ftp.debian.org
> > > and ports.debian.org (let's call it ports), as well as duplication of the
> > > source.
> > As a mirror operator, I think that this sucks. Badly.
> So don't duplicate ports.  That's the whole point.
I'd still like to support them, on some of my mirrors.

-- 
ciao,
Marco


signature.asc
Description: Digital signature


Re: my thoughts on the Vancouver Prospectus

2005-03-19 Thread Jose Carlos Garcia Sogo
On Sat, 19-03-2005 at 04:13 -0600, Bill Allombert wrote:
> On Sat, Mar 19, 2005 at 09:13:07AM +0100, Karsten Merker wrote:
> > On Fri, Mar 18, 2005 at 06:44:46PM -0800, Steve Langasek wrote:
> > > [cc:ed back to -devel, since these are technical questions being raised 
> > > and
> > > answered]
> > 
> > > > * Why is the permitted number of buildds for an architecture restricted 
> > > > to
> > > >   2 or 3?
> > > 
> > > - Architectures which need more than 2 buildds to keep up with package
> > >   uploads on an ongoing basis are very slow indeed; while slower,
> > >   low-powered chips are indeed useful in certain applications, they are
> > >   a) unlikely to be able to usefully run much of the software we currently
> > >   expect our ports to build, and b) definitely too slow in terms of
> > >   single-package build times to avoid inevitably delaying high-priority
> > >   package fixes for RC bugs.
> > 
> > a) is true for some big packages like GNOME and KDE, but that
> > does not impede the architecture's usefulness for other software
> > we have in the archive.
> 
> Also, it is an example of the ridiculously large source packages which
> create other problems by themselves, such as the amount of bandwidth wasted
> when one has to apply a one-line fix, in particular for security updates.

 FYI, GNOME is not a single source package but a collection of source
packages that can be (and are) upgraded independently. The only point
when almost all of them must be uploaded together is when a new release
is made, and even then things don't need to go in a one-push upload.

 Cheers,
-- 
Jose Carlos Garcia Sogo
   [EMAIL PROTECTED]


signature.asc
Description: This part of the message is digitally signed


Re: Buildd redundancy (was Re: Bits (Nybbles?) from the Vancouver...)

2005-03-19 Thread Steve Langasek
On Wed, Mar 16, 2005 at 12:20:34AM -0800, Blars Blarson wrote:
> In article <[EMAIL PROTECTED]> [EMAIL PROTECTED] writes:
> >- the release architecture must have N+1 buildds where N is the number
> >  required to keep up with the volume of uploaded packages

> If we are going to require redundancy, I think we should do it better
> and add:

> - systems located in at least two different facilities (different
>   cities and backbones if at all possible)

> This allows for redundancy in case of fire, flood, earthquake etc.

Yes, this was my expectation with this requirement, and I've confirmed that
others at the meeting had the same thing in mind -- geographic separation is
part of the point of having buildd redundancy.

> - at least two buildd administrators

> This allows the buildd administrator to take vacations, etc.

This is at odds with what I've heard from some buildd maintainers that
having multiple buildd maintainers makes it hard to avoid stepping on one
another's feet, so I wouldn't want to set a requirement like this without
further discussion.  Having multiple *local* admins, OTOH, follows from
having geographic separation of the machines.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Steve Langasek
On Fri, Mar 18, 2005 at 05:43:26PM +0100, Adrian Bunk wrote:
> On Thu, Mar 17, 2005 at 09:47:42PM -0800, Steve Langasek wrote:
> > On Mon, Mar 14, 2005 at 07:59:43PM +, Alastair McKinstry wrote:
> > > > AFAI can tell, anybody can host an archive of packages built from 
> > > > stable 
> > > > sources for a scc or unofficial port. And - if I read the conditions on 
> > > > becoming a fully supported Debian arch right - then having security 
> > > > support 
> > > > for an external pool of this arch is a good indicator that it should be 
> > > > a 
> > > > fully supported stable release (amongst other things).

> > > The plan as proposed is that the Debian scc ports are purely builds of
> > > unstable. Hence this build out of the last release (e.g. etch) becomes a
> > > subproject of a second-class project of Debian. It effectively has
> > > little credibility.

> > Well, the release team are not the only Debian developers with credibility,
> > surely?  Not everything needs to go through us; if the project has the will
> > to do stable releases of these architectures, in spite of the release team
> > being unwilling to delay other architectures while waiting for them, then
> > it should be very possible to provide full stable releases for these
> > architectures.
> >...

> Which delays are expected for etch, that are not only imposed by the 
> usage of testing for release purposes? [1]

> I do still doubt that testing actually is an improvement compared to the 
> former method of freezing unstable, and even more do I doubt it's worth 
> sacrificing 8 architectures.

If the proposal already gives porters the option to freeze ("snapshot")
unstable to do their own releases, in what sense is this "sacrificing"
architectures?  It sounds to me like it's exactly what you've always wanted,
to eliminate testing from the release process...

> [1] The installer might be a point, but since all sarge architectures
> will have a working installer and I hope there's not another
> installer rewrite planned for etch this shouldn't be a big issue.

Getting the installer into a releasable state across all 11 architectures
simultaneously *is* an ongoing issue, whether it involves a rewrite or not.
So is getting a releasable toolchain, and a releasable kernel; so is getting
buildds on all architectures to attempt to build packages soon enough after
upload to give maintainers timely feedback about brokenness in their
packages.  None of this seems to be imposed by the usage of testing.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Peter 'p2' De Schrijver
On Fri, Mar 18, 2005 at 06:58:50PM -0800, Thomas Bushnell BSG wrote:
> Peter 'p2' De Schrijver <[EMAIL PROTECTED]> writes:
> 
> > A much faster solution would be to use distcc or scratchbox for
> > crosscompiling.
> 
> Debian packages cannot be reliably built with a cross-compiler,
> because they very frequently need to execute the compiled binaries as
> well as just compile them.

That's exactly the problem which is solved by using distcc or
scratchbox. distcc basically sends preprocessed source to another
machine and expects an object file back. So you run the build on a
machine of the target arch (or an emulator), but the compiler is
actually a small program which sends the source to the fast machine
running the cross-compiler and expects the object code back.
Scratchbox provides a sandbox on the machine doing the cross-compile,
in which target binaries can be executed either by running them on a
target board sharing the sandbox filesystem over NFS or by running
them in qemu.
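As a rough sketch of the distcc half of this setup (the hostname, network range, and cross-compiler name below are assumptions for illustration, not a tested recipe):

```shell
# On the fast x86 host: serve the cross-compiler via distccd.
# 'arm-linux-gcc' is an assumed cross-compiler name; adjust to your toolchain.
#   distccd --daemon --allow 192.168.1.0/24

# On the slow or emulated target-arch machine that drives the build:
export DISTCC_HOSTS=fasthost         # assumed hostname of the cross-compile box
export CC="distcc arm-linux-gcc"     # compile steps are shipped to fasthost
# Configure, link, and test steps still run locally (natively or under
# emulation); only the actual compilations go over the wire.
dpkg-buildpackage -us -uc
```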

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Peter 'p2' De Schrijver
> Yes, but the argument against cross-compiling has always been stronger
> - if you are compiling under an emulator, you can at least test the
> produced binaries under that same emulator, and you have a high degree
> of confidence that they work reliably (that is, if an emulator bug
> led to gcc miscompiling, it'd be surprising if the result still ran
> under the emulator). Using cross-compilers you can't really test it.
> And, also an important point, you could potentially end up with a
> resulting package you could not generate on the target architecture.
> 

You can always run generated binaries on an emulator or a target board
for testing. I have cross-compiled a lot of code using gcc and have yet
to see wrong binaries caused by cross-compiling versus native compiling.
I could imagine problems with floating-point expressions evaluated at
compile time that result in slightly different values.
The only way to see whether cross-compiling generates wrong binaries is
to try it and evaluate the results.

> But, yes, I'd accept a cross-compiler as a solution as well in case we
> could not run an emulator for a given slow platform.

We will probably need both as some build scripts run generated code.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: The 98% and N<=2 criteria (was: Vancouver meeting - clarifications)

2005-03-19 Thread David Weinehall
On Fri, Mar 18, 2005 at 11:32:08PM -0800, Steve Langasek wrote:
[snip]
> As pointed out in a recent thread, most of the core hardware portability
> issues are picked up just by building on "the big three" -- i386, powerpc,
> amd64.  If we know the software isn't going to be used, is it actually
> useful to build it as a "QA measure"?  What value is there, in fact, in
> checking for bugs that will only be tripped while building software that
> isn't going to be used?

Because bugs are bugs, no matter whether they bite us currently.
That said, I'm a firm believer in the suggestion posed by Jesus
Climent[1] that we should have a base set of software (where base is
probably a bit bigger than our current base) released for all
architectures that have a working installer, and then have full
official releases for only a limited set of architectures.

This way, we'd satisfy both people using Debian as a base for
embedded and other customised systems and most (but not all)
porters. Of course some people are never satisfied, but then again,
there is no way to solve this that makes everyone happy.


[1] Hopefully; I might be remembering incorrectly.


Regards: David Weinehall
-- 
 /) David Weinehall <[EMAIL PROTECTED]> /) Northern lights wander  (\
//  ~   //  Dance across the winter sky //
\)  http://www.acc.umu.se/~tao/(/   Full colour fire   (/





Re: Accepted valknut 0.3.7-1 (i386 source)

2005-03-19 Thread Matthias Urlichs
Hi, Pasi Savilaakso wrote:

> There is nothing else changed in
> package than new source so I don't really know what else I could say.

You could say
  * New Upstream release (Closes:#12345)
- No more frobnication (Closes:#23456)
- Fix random typos (Closes: #34567)
- Fix random data loss (urgent) (Closes: #45678)

where #12345 is a "please package the new upstream version" bug and 
#45678 is the one that justifies the urgency.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]






Re: Release sarge now, or discuss etch issues? (was: Bits (Nybbles?)from the Vancouver release team meeting)

2005-03-19 Thread Gunnar Wolf
Ola Lundqvist dijo [Tue, Mar 15, 2005 at 09:19:45PM +0100]:
> > And would a larger discussion at debconf'05 not have been more appropriate
> > than handing down a couple of already-taken decisions disguised as a
> > proposal?
> > 
> > It is not too late for this yet, but there needs to be a real discussion
> > with real facts, and not just a list of resolutions leaving 8/11ths of
> > the project in the cold.
> 
> Please take this kind of discussion to debian-devel, as that makes it
> possible for people not attending debconf to be part of the discussion.

I do believe that Debconf is an ideal place for this - having 150 of
us together might mean having 40 of us interested in joining this
discussion, brainstorming (and shouting at each other) for ~2hr
instead of over 600 messages, and coming up with something similar to
the Vancouver stuff - a summary of the points reached, not a firm
decision... but a summary with more adherents, and with more people
convinced by the release and ftp teams of what and why (or people on
those teams convinced back, or... whatever :) )

Of course, if you cannot make it to Debconf, you will still learn about
the discussion results. In fact, Debconf plans to capture audio/video of
the sessions at the auditoriums, so you might even participate via
IRC. 

I intended to propose this topic for a round table, but was asked to
wait by one of the release team members, as they were close to
announcing the Vancouver stuff... Anyway, I am not formally proposing
it, but I do expect it to happen - after all, we will be in HEL ;-)

Greetings,

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5554-9450
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Re: procmail and Large File Support

2005-03-19 Thread Gunnar Wolf
Ola Lundqvist dijo [Wed, Mar 16, 2005 at 09:18:33PM +0100]:
> Hello
> 
> On Fri, Feb 25, 2005 at 07:45:47PM -0600, Ron Johnson wrote:
> > On Sat, 2005-02-26 at 00:53 +0100, Santiago Vila wrote:
> > > Hello.
> > > 
> > > I have several reports saying procmail does not support mbox folders
> > > larger than 2GB. Questions:
> > 
> > OT here, but WTF are people smoking, to have 2GB mbox files?
> 
> Some people tend to have really large inboxes. I have had a number of
> customers with inboxes of several GB. They tend to get quite a lot
> of attachments (reports etc.) and do not have time to delete mail.
> It grows quite fast.

Ummm... and wouldn't it make more sense for them to switch to maildir
instead of mbox? I wouldn't like to search for new mail in there.
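For what it's worth, procmail makes the switch easy: a delivery target ending in a slash is treated as a maildir. A minimal sketch (the `$HOME/Maildir/` location is an assumed path, not from the thread):

```
# ~/.procmailrc -- deliver all mail to a maildir instead of a giant mbox.
# The trailing slash tells procmail the destination is a maildir.
:0
$HOME/Maildir/
```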

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5554-9450
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Re: Buildd redundancy (was Re: Bits (Nybbles?) from the Vancouver...)

2005-03-19 Thread Matthias Urlichs
Hi, Steve Langasek wrote:

>> This allows the buildd administrator to take vacations, etc.
> 
> This is at odds with what I've heard from some buildd maintainers that
> having multiple buildd maintainers makes it hard to avoid stepping on one
> another's feet,

I assume that that's a problem if the buildd admins are prone to not
looking where they're going.

TTBOMK, m68k has no such problem.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]






Re: Required firewall support

2005-03-19 Thread Matthias Urlichs
Hi, Steve Greenland wrote:

> On 18-Mar-05, 03:28 (CST), Blars Blarson <[EMAIL PROTECTED]> wrote:
>> >Linux fails this. Even with forwarding disabled, it will accept packets
>> >for an address on interface A via interface B.
>> 
>> Enable rp_filter and it does reject such packets.
>> 
>> echo 1 >/proc/sys/net/ipv4/conf/${dev}/rp_filter
> 
> See, that's a nice theory, but it doesn't actually work.

Umm, rp_filter is for rejecting packets whose *source* address is from the
wrong network.

If you want to block accepting your own address as the *destination*, then
no, there's no config parameter for that. Use iptables rules. :-/
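(A minimal sketch of such a rule — the interface name and address are placeholders, not from this thread: drop packets arriving on eth1 that are addressed to the IP bound to eth0. Needs root, of course.)

```shell
# Placeholder values: eth0 holds 192.0.2.10; refuse it via eth1.
iptables -A INPUT -i eth1 -d 192.0.2.10 -j DROP
```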

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]






Re: Relaxing testing requirements (was: summarising answers toVancouver critique)

2005-03-19 Thread Gunnar Wolf
martin f krafft dijo [Fri, Mar 18, 2005 at 12:57:54PM +0100]:
> > The security team is under-staffed *now*, AFAICT; and you want to increase
> > their workload for etch on the assumption that nothing bad will come of it?
> 
> No, I said we should stock the security team, which I meant to read
> as: add more man-power.

Why has this not happened yet? This has been a known problem for quite
a long time...

The answer is simple: Not everybody can become a security team member;
the required technical skills are quite high. There is a VERY high
commitment requirement as well, so even some of the skilled people do
not become part of the security team. Besides _that_, most people
agree that creating new code is more fun than patching existing code,
so even fewer people step into that position.

Remember this is a volunteer project. I know of no extra volunteers
willing to take up a task such as security. You repeatedly talk about
adding man-power to it. So... Are you in?

Greetings,

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5554-9450
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Re: The 98% and N<=2 criteria (was: Vancouver meeting - clarifications)

2005-03-19 Thread Gunnar Wolf
Steve Langasek dijo [Fri, Mar 18, 2005 at 11:32:08PM -0800]:
> > There are packages we recognize will be of little use in certain
> > architectures - say, KDE on m68k, qemu on a !i386, etc. They should be
> > built on all architectures where they are expected to be buildable,
> > anyway, as a QA measure - many subtle bugs appear as the result of
> > architecture-specific quirks.
> 
> > "Architecture: any" means "build anywhere". We could introduce a
> > second header, say, Not-deploy-for: or Not-required-for:. This would
> > mean that KDE _would_ be built for m68k if the buildds are not too
> > busy doing other stuff, and probably would not enter our archive (or
> > would enter a different section - just as we now have contrib and
> > non-free, we could introduce not-useful ;-) )
> 
> As pointed out in a recent thread, most of the core hardware portability
> issues are picked up just by building on "the big three" -- i386, powerpc,
> amd64.  If we know the software isn't going to be used, is it actually
> useful to build it as a "QA measure"?  What value is there, in fact, in
> checking for bugs that will only be tripped while building software that
> isn't going to be used?

As you say, _most_ of the issues are triggered by one of those three
chips, not all. And by not making it a hard requirement to compile the
packages which will not be used, you are not holding the project back
waiting for m68k's KDE. m68k will probably _never_ compile KDE, as I
doubt its buildds are ever idle - but which would you prefer for, say,
our ia64 buildd: to just sit there waiting for a new package to
arrive, or to start compiling something that will be useful only for
QA, and only probably?
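(To make the idea concrete, a hypothetical debian/control stanza — "Not-required-for:" is only the field proposed above, not something dpkg currently understands, and the package name is illustrative:)

```
Package: kde-core
Architecture: any
Not-required-for: m68k
```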

Greetings,

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5554-9450
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Bug#300409: ITP: gruler -- a customizable screen ruler for GNOME

2005-03-19 Thread Maykel Moya
Package: wnpp
Severity: wishlist
Owner: Maykel Moya <[EMAIL PROTECTED]>

* Package name: gruler
  Version : 0.6
  Upstream Author : Ian McIntosh <[EMAIL PROTECTED]>
* URL : http://linuxadvocate.org/projects/gruler
* License : GPL
  Description : a customizable screen ruler for GNOME

gruler is an on-screen ruler for measuring horizontal
and vertical distances in any application.

-- System Information:
Debian Release: 3.1
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: i386 (i686)
Kernel: Linux 2.6.11-1
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)





Bug#300406: ITP: ruby-zoom -- Ruby ZOOM API for the Z39.50 book information retrieval protocol

2005-03-19 Thread Dafydd Harries
Package: wnpp
Severity: wishlist
Owner: Dafydd Harries <[EMAIL PROTECTED]>

* Package name: ruby-zoom
  Version : 0.1.0
  Upstream Author : Laurent Sansonetti <[EMAIL PROTECTED]>
* URL : http://ruby-zoom.rubyforge.org/
* License : LGPL
  Description : Ruby ZOOM API for the Z39.50 book information retrieval 
protocol

Ruby/ZOOM provides a Ruby binding to the Z39.50 Object-Orientation Model
(ZOOM), an abstract object-oriented programming interface to a subset of the
services specified by the Z39.50 standard, also known as the international
standard ISO 23950.

-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable'), (101, 'experimental')
Architecture: i386 (i686)
Kernel: Linux 2.6.10-rc3-3
Locale: LANG=cy_GB.UTF-8, LC_CTYPE=cy_GB.UTF-8 (charmap=UTF-8)





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Daniel Kobras
On Sat, Mar 19, 2005 at 01:21:15AM +0100, Marco d'Itri wrote:
> On Mar 18, Steve Langasek <[EMAIL PROTECTED]> wrote:
> 
> > There would definitely be duplication of arch:all between ftp.debian.org
> > and ports.debian.org (let's call it ports), as well as duplication of the
> > source.
> As a mirror operator, I think that this sucks. Badly.

What's wrong with splitting into ftp-full-monty.d.o, carrying all archs,
including the popular ones, and ftp.d.o, carrying only the most popular
subset? This way, there's no need to mirror from both of them, and
duplication is kept to a minimum. Slightly increased traffic from the
fullblown server is the only drawback I see compared to the ports
proposal.

Regards,

Daniel.





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Thiemo Seufer
Anthony Towns wrote:
[snip]
> So, I'd just like to re-emphasise this, because I still haven't seen 
> anything that counts as useful. I'm thinking something like "We use s390 
> to host 6231 scientific users on Debian in a manner compatible to the 
> workstations they use; the software we use is ; we rely on having 
> security support from Debian because we need to be on the interweb 2; 
> ...". At the moment, the only use cases I'm confident exist are:
> 
>   m68k, mips, mipsel, hppa: I've got one in the basement, and I like 
>   to brag that I run Debian on it; also I occassionally get some work out 
> of 
> it, but it'd be trivial to replace with i386.

Well, for mips/mipsel, this covers only the machines which aren't that
relevant nowadays. MIPS is one of the most numerous 32/64-bit
architectures currently in use, but most of the time it is hidden in
things which aren't commonly recognized as computers.

http://www.mips.com/

A significant portion of mips/mipsel usage is geared towards networked
devices. Note that MIPS, Inc. does not manufacture devices or CPUs;
they sell CPU designs.

http://www.mips.com/content/Corporate/AboutUs/content_html#mips

While it would be fun to run Debian on your digital camera with a large
flash card, I don't see a real use for it, so I will restrict the
following list to products where a stable Debian distribution can be
useful.

- Cheap/Lean/Silent desktop computer
  http://www.pmc-sierra.com/xiaohu/

- Digital TV/Media Player/Video Recorder
  http://www.ati.com/products/settopwonderxilleon/
  http://www.semicon.toshiba.co.jp/eng/prd/micro/prd_inf/tx49.html

  Those will commonly have USB/Ethernet, and in some cases storage,
  which means they can be used as an X terminal, network client, or
  lightweight standalone machine.

- WAP/Small router/DSL Modem
  http://meshcube.org/index_e.html
  http://mycable.de/xxs1500/

  Devices in this class had ~8 MB RAM two years ago and could at best
  run a customized Linux. Today they have ~32 MB and can run Debian, but
  it is still beneficial to use a customized version
  (http://sourceforge.net/projects/picodebian). They are likely to grow
  further in the near future.

  The main hindrance for Debian on these systems is the lack of
  available storage. The size of cheap flash memory grows much faster
  than the Debian base installation, so this problem will go away over
  time as well.

- High-end prototype boards
  http://sibyte.broadcom.com/public/boards/index.html

  Those boards are used by vendors to evaluate/test the CPU they have
  chosen for their product (Cisco and NetApp are well-known names).

  Some Debian developers have such boards. While they aren't easy to
  come by, and the userbase outside Debian is small, they are valuable
  because of their speed.

[snip]
>   arm: We're developing some embedded boxes, that won't run Debian 
> proper, but it's really convenient to have Debian there to bootstrap 
> them trivially.

ARM is roughly in the same situation as the slower MIPS CPUs, but
with a large percentage of it used in mobile devices. With mobile
networking on the rise, Debian gets more interesting there.

>   s390: Hey, it's got spare cycles, why not?

AFAIH there are some serious Debian users on s390, but they don't talk
about it publicly.

[snip]
> Knowing why you're 
> using Debian and not another distribution or OS would be interesting too.

Outside the non-free/commercial realm, there are three choices for
mips/mipsel:

 - NetBSD, with a small core of software, limited (build-)testing
   outside that, and reproducibility problems caused by the
   source-centric approach

 - Gentoo, with all of the above, Linux-based, and still tagged as
   experimental, which means experimental by Gentoo standards

 - Debian


Thiemo





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Marco d'Itri
On Mar 19, Daniel Kobras <[EMAIL PROTECTED]> wrote:

> What's wrong with splitting into ftp-full-monty.d.o, carrying all archs,
> including the popular ones, and ftp.d.o, carrying only the most popular
> subset? This way, there's no need to mirror from both of them, and
> duplication is kept to a minimum. Slightly increased traffic from the
> fullblown server is the only drawback I see compared to the ports
> proposal.
That on some servers I'd like to mirror both archives, and I'd rather
not waste a few GB on duplicated files.

-- 
ciao,
Marco




Security work in Debian (Was: Relaxing testing requirements)

2005-03-19 Thread Petter Reinholdtsen
[Gunnar Wolf]
> The answer is simple:

For every problem there is a simple and obvious answer which just
happens to be wrong.  I believe you ran into one of those.  :)

> Not everybody can become a security team member, the required
> technical skills are quite high. There is a VERY high commitment
> requirement as well, so even some of the skilled people do not
> become part of the security team. Besides _that_, most people agree
> that creating new code is more fun than patching existing code, so
> even less people step into that position.
> 
> Remember this is a volunteer project. I know of no extra volunteers
> willing to take up such a task as Security. You repeatedly talk
> about adding man-power to it. So... Are you in?

There are two security teams in effect now.  The debian/stable team,
working to make sure the stable release of Debian gets security fixes
as soon as possible.  They get security warnings before the issues
become public knowledge.  Membership in this team is not open to
everyone.

There is also the debian/testing team, working to fix security issues
in the testing release of Debian.  This team only works with publicly
known information, and is open to everyone interested in helping out
with security fixes for Debian.  This second team was created by Joey
Hess as part of his work for Debian Edu, and there are several
volunteers participating in this effort.  To participate, check out
<http://secure-testing.alioth.debian.org/>.  Debian Edu is trying
to find funding to hire more people to work on security in Debian.
Contact me if you are interested in funding this work. :)

I hope in time the "public" debian/testing security team can become a
good recruitment base for the "private" debian/stable security team.
This will hopefully let us avoid the current problem with the lack of
man-power in the debian/stable security team.





orphaning packages

2005-03-19 Thread Sergio Rua
Hello,

My GPG key was compromised before Xmas and since then, I have been
unable to get a new key. Two of my packages are accumulating bugs which
I cannot fix and close, so I decided to orphan them; if I am able to get
a new key in the future, I'll find new packages to maintain.

They are:

openwebmail
partimage

If somebody is interested in them, please drop me an email. Otherwise,
I'll submit a bug against wnpp early next week for a proper orphaning
procedure.
--
Sergio

It were not best that we should all think alike; it is difference of opinion
that makes horse-races.
-- Mark Twain, "Pudd'nhead Wilson's Calendar"





Re: orphaning packages

2005-03-19 Thread Laszlo Boszormenyi
Hi,

On Sat, 2005-03-19 at 18:55 +0100, Sergio Rua wrote:
> My GPG was compromissed before Xmas and since then, I was unable to get
> a new key.
 Bad thing. :( Hope you will get a new one soon.

>  Two of my packages are getting full of bugs which I can fix and
> close so I decided to orphan them and if I'm be able to get new
> key in the future, I'll find new packages to mantain.
 What about someone (maybe me) sponsoring your uploads? Fix the bugs,
put the new package online and send me where I can get it. Then,
after a check, I will upload it for you. So you won't lose the
packages.

Regards,
Laszlo





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Henning Makholm
Scripsit Daniel Kobras <[EMAIL PROTECTED]>
> On Sat, Mar 19, 2005 at 01:21:15AM +0100, Marco d'Itri wrote:
>> On Mar 18, Steve Langasek <[EMAIL PROTECTED]> wrote:

>> > There would definitely be duplication of arch:all between ftp.debian.org
>> > and ports.debian.org (let's call it ports), as well as duplication of the
>> > source.

>> As a mirror operator, I think that this sucks. Badly.

> What's wrong with splitting into ftp-full-monty.d.o, carrying all archs,
> including the popular ones, and ftp.d.o, carrying only the most popular
> subset?

It should be possible to mirror the whole shebang without duplicating
content. This means that there ought to be a way for a mirror to use a
common pool directory for source and _all packages, even if one also
mirrors IRREGULAR architectures with their own testing-like suites
that do not track the same versions as the REGULAR one.

Such functionality is, however, beyond what our current archive
scripts are capable of, so it won't happen unless somebody writes
and tests the code to achieve it.
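(The pool-sharing idea can be illustrated with plain hardlinks, which is how mirror tools typically avoid storing identical files twice — the directory and file names below are made up for the example; a real mirror would drive this via rsync or ftpsync:)

```shell
# Two archive trees sharing one on-disk copy of a package.
mkdir -p main/pool ports/pool
echo "fake .deb contents" > main/pool/foo_1.0_all.deb
ln -f main/pool/foo_1.0_all.deb ports/pool/foo_1.0_all.deb
# Both paths now refer to the same inode; the hardlink count shows it:
stat -c %h main/pool/foo_1.0_all.deb   # prints 2
```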

I think it is fair enough for the ftpmasters and release team to
declare: "we are not going to develop that code". Unfortunately
the Vancouver text does not distinguish clearly between

  1. We are not going to do this. Others may, if they care enough.
  2. This will not get done.
  3. We're proposing rules which say nobody must do this.

which leads to misunderstandings [1]. I assume, however, that (1) is
what is really meant.

[1] See e.g. my own incessant whining about "unstable-only" earlier
this week (with an outlying relapse yesterday - sorry, Anthony).

-- 
Henning Makholm  "Panic. Alarm. Incredulity.
   *Thing* has not enough legs. Topple walk.
  Fall over not. Why why why? What *is* it?"





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Henning Makholm
Scripsit Anthony Towns 
> Henning Makholm wrote:

>> The question is whether the *porters* think they have a sufficiently
>> good reason to do the work of maintaining a separate testing-esque
>> suite. If the porters want to do the work they should be allowed to do
>> it.

> If they don't need any support from anyone else, they're welcome to do
> whatever they like.

My apologies. I was just whining, long after I ought to have concluded
that I was whining from false premises.

-- 
Henning Makholm   "The great secret, known to internists and
 learned early in marriage by internists' wives, but
   still hidden from the general public, is that most things get
 better by themselves. Most things, in fact, are better by morning."





Re: automake/autoconf in build-dependencies

2005-03-19 Thread Henning Makholm
Scripsit Junichi Uekawa <[EMAIL PROTECTED]>

>> > To a certain degree, those would have been fixed if people
>> > build-depended on auto*, as they would have picked up fixed versions
>> > of the .m4 files.

>> But that has to be offset against the huge number of bugs that would
>> occur if we ran auto* at run time and had everything break everytime
>> the target moves.

> That kind of bug will appear when a user tries to modify
> other parts of the auto* files, and regenerate them.

Better to have them restricted to developers and users who modify code
than to have them happen randomly to people who just want to build the
unmodified package.

-- 
Henning Makholm "However, the fact that the utterance by
   Epimenides of that false sentence could imply the
   existence of some Cretan who is not a liar is rather unsettling."





Re: procmail and Large File Support

2005-03-19 Thread Henning Makholm
Scripsit Gunnar Wolf <[EMAIL PROTECTED]>

>> Some people tend to have really large inboxes. I have had a number of
>> customers that have several GB inbox. They tend to get quite a lot
>> of attachments (reports etc) and do not have the time to delete mail.
>> It will grow quite fast.

> Ummm... And wouldn't it make more sense for them to switch to maildir
> instead of mbox? I wouldn't like to search for new mails in there.

Woah... deja vu.

-- 
Henning Makholm"I ... I have to return some videos."





where to look to understand the big picture (was: Question for candidate Towns)

2005-03-19 Thread martin f krafft
I am taking this to -devel. Please remove -vote from all replies.

... and sorry for the late reply.

also sprach Martin Schulze <[EMAIL PROTECTED]> [2005.03.14.0826 +0100]:
> When the code is public, rtfm is the proper answer.

This answer seems logical to you and me. It is, however, not the
didactically correct answer for everyone.

> One might add "document it properly afterwards" as well, though.

Yes, or else we'll be in ten years where we are right now. :/

> Some data cannot be made available for legal or other binding
> obligations (new queue, security archive).

For instance, where would I go to obtain the meta-information about
this? I learnt about the NEW queue write-only restriction on IRC by
chance. Is this (an LFAQ item) documented somewhere publicly?

> If you feel that some bits are missing and need to be documented
> better, point them out and get them documented better, maybe by
> doing it on your own.

Of course. I am doing so, if only by writing (well, having
written) an exhaustive reference book on Debian.

However, I should point out again that this is not about me, and
that many people are either (a) not willing to document, or (b)
too daunted by the complexity of our project and thus don't even
know where to start.

> > No, I do not have (nor do I want to present) a single example
> > for you, Joey. I am sure that you will dissect just about
> > anything I write. All the better if there is an easy way to find
> > out
> 
> I hope to be able to, but I cannot guarantee that I am.  I believe
> that most parts of the project are either documented or publically
> available in source form so that all developers can educate
> themselves.

The question is whether it's easily or readily accessible. I believe
that source code is *not* the right medium for everyone wanting to
get involved. You may disagree with me (we are not here to reach an
agreement on this), but let it be said that we often tell people
that coding is not required to contribute to Debian, that there are
many other areas where help is needed... the problem is that there's
quite a dampening effect on motivation when one does not know the
project for which one is working (sorry for the awkward wording).

Another, related issue is that the infrastructure itself may be
documented or available for scrutiny, but its use or status are
obscure(d). Bas gives a good example in his recent email to
debian-devel[0].

0. [EMAIL PROTECTED]

> > everything about the project. It just does not help much if
> > every aspect is documented in a different place, or using
> > a different paradigm.
> 
> Then try to unite the documentation instead of blindly bashing and
> whining.

http://debianbook.info

The book is still targeted at users and does not include all the
information relevant to developers, but a bunch of stuff is still
included. Moreover, depending on its success, I may followup with
a developers' book.

> > What you fail to see is that there is something daunting about
> > a project of this size and complexity to those who are trying to
> > understand it top-down, rather than having been part of building
> > it bottom-up.
> 
> What you fail to see is that the bits are available and that you
> "only" have to build the large picture.  If you're too lazy to do
> so, it's not the job of the people working on essential corners of
> the project to educate every random Johnny Sixpack for the sake of
> it.

Of course I agree with you. However, "RTFM" or "UTSL" is not the
answer to every question.

Let's go hypothetical. If you will, maybe you can show me where
I would find answers to the following questions:

  1. I have been a system administrator for years, with experience
 in both multi-user support and critical infrastructure
 maintenance. I would like to become a Debian System
 Administrator but cannot contribute any machines at the moment.
 What do I do?

  2. I want to become a member of the security team. I screen
 full-disclosure and other mailing lists and try to take a stab
 at reproducing and/or fixing problems whenever I find the time.
 In the past, it has happened that messages I forwarded to the
 security team have been politely acknowledged as "yeah, we know
 about this already", and patches I submitted have been rejected
 because other fixes were already in place. I realise that the
 security team must maintain a certain level of secrecy, but how
 am I supposed to contribute when I am excluded? I have been
 told about a database for security issues, but so far there
 seems to be no such thing. I would help with this issue, but
 I do not have access to the information.

I could probably conceive other examples, but that's not the point.
I see Debian as a meritocracy, and the way to receive privileges is
to contribute and be pro-active. However, it cannot be the goal to
expect willing users to figure out everything about a job all
by themselves prior 

Re: The 98% and N<=2 criteria

2005-03-19 Thread Henning Makholm
Scripsit David Weinehall <[EMAIL PROTECTED]>

> That said, I'm a firm believer of the suggestion posed by Jesus
> Climent[1], that we should have base set of software (where base is
> probably a bit bigger than our current base) released for all
> architectures that have a working installer, and then only have full
> official releases for a limited set of architectures.

Such a base set of software would surely include a compiler toolchain,
wouldn't it? It sounds plausible that the toolchain is the collection
of software where architecture-specific bugs are _most_ likely to turn
up, so would we actually have gained anything then?

-- 
Henning Makholm   "og de står om nissen Teddy Ring."



Re: The 98% and N<=2 criteria

2005-03-19 Thread Thiemo Seufer
Henning Makholm wrote:
> Scripsit David Weinehall <[EMAIL PROTECTED]>
> 
> > That said, I'm a firm believer of the suggestion posed by Jesus
> > Climent[1], that we should have base set of software (where base is
> > probably a bit bigger than our current base) released for all
> > architectures that have a working installer, and then only have full
> > official releases for a limited set of architectures.
> 
> Such a base set of software would surely include a compiler toolchain,
> wouldn't it? If sounds plausible that the toolchain is the collection
> of software where architecture-specific bugs are _most_ likely to turn
> up, so would we actually have gained anything then?

Toolchain bugs which affect the toolchain itself are usually quickly
resolved, once a port is somewhat stable. If this isn't the case, then
the port can't even stay in unstable for long.


Thiemo





Bug#300455: ITP: gwp -- GNOME War Pad (GWP) is a 'VGA Planets' strategy game client for GNOME.

2005-03-19 Thread Lucas Di Pentima
Package: wnpp
Severity: wishlist
Owner: Lucas Di Pentima <[EMAIL PROTECTED]>


* Package name: gwp
  Version : 0.3.6
  Upstream Author : Lucas Di Pentima <[EMAIL PROTECTED]>
* URL : http://gwp.lunix.com.ar
* License : GPL
  Description : GNOME War Pad (GWP) is a 'VGA Planets' strategy game client 
for GNOME.

 GNOME War Pad is a VGA Planets client written in C for the GNOME
 desktop environment. Its design is very 'starchart-centric',
 giving a global view of the game at all times.
 VGA Planets is a space-conquest, play-by-email strategy game
 that can be played by up to 11 players simultaneously; it has been
 played by strategy fanatics since the FidoNet days.

-- System Information:
Debian Release: 3.1
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: i386 (i686)
Kernel: Linux 2.4.27-1-686
Locale: LANG=es_ES, LC_CTYPE=es_ES (charmap=ISO-8859-1) (ignored: LC_ALL set to 
es_ES)





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Adrian Bunk
On Sat, Mar 19, 2005 at 04:19:03AM -0800, Steve Langasek wrote:
> 
> > Which delays are expected for etch, that are not only imposed by the 
> > usage of testing for release purposes? [1]
> 
> > I do still doubt that testing actually is an improvement compared to the 
> > former method of freezing unstable, and even more do I doubt it's worth 
> > sacrificing 8 architectures.
> 
> If the proposal already gives porters the option to freeze ("snapshot")
> unstable to do their own releases, in what sense is this "sacrificing"
> architectures?

Well, what's needed to produce a release from a "snapshot" of unstable
in a way that's at least roughly comparable with current stable
releases on this architecture is:
- release management
- security support

Freezing for one architecture from some snapshot of unstable is roughly
comparable to the complete release process you as a release team are
doing - but worse. One email from you to d-d-a saying "We will freeze in
two months, please don't make big changes and stabilize your packages
instead." results in an unstable (and therefore a testing) that's in
relatively good shape at the start of the freeze [1]. And after the
start of the freeze, most maintainers will work on stabilizing the
frozen packages instead of putting new versions into unstable. Yes, this
doesn't work 100%, but it still distributes a serious amount of work
from the release team to all maintainers. How much of this work will be
distributed away from the porters if they announce: "The m68k team will
release based on an unstable snapshot taken two months from now."?

And assuming you want to use such port-specific releases on computers
that have more than one user and/or that are connected to any network,
you need security support. Joey (who should know best) already explained
that for the security team it doesn't matter much whether they have to
support 2 or 20 architectures within a release, but every additional
distribution causes a lot of extra work for them. On whom do you want to
put the burden of security releases for the 8 sarge architectures you
expect not to release with etch? Should the security team carry this
burden, or should every porter team form their own security team
releasing their own DSAs?

That's so much extra work for every single port that it becomes nearly
impossible for architectures to achieve what they would get in the
regular release process.

> It sounds to me like it's exactly what you've always wanted,
> to eliminate testing from the release process...
>...

Yes, I'm not a fan of testing.

But I do understand how testing works.

Neither my previous emails in these threads nor this one are simply
"testing bashing" emails without real content. I'm trying to
point at the weaknesses of a release process with testing - like every 
other release process, it has both advantages and disadvantages.

Testing has its advantages.
You know that all packages have their dependencies fulfilled [2], were
built on all architectures, and that some kinds of bugs are less likely
to make it into testing.

There were many release updates the release team sent that mentioned as 
a major success that transition A is now finally into testing and it was 
hoped that transition B would go into testing soon. How many weeks were 
spent altogether getting transitions into testing that 
were already completed in unstable? And how many hours have members of 
the release team spent for hints, coordinating between maintainers etc. 
for getting these transitions into testing? These are extra costs of 
testing.

And you explained in your announcement that removing approx. 8 of the 
current architectures from the release process "... will drastically 
reduce the architecture coordination required in testing, giving us a 
more limber release process and (it is hoped) a much shorter release 
cycle on the order of 12-18 months."

IOW:
You are saying the current release process with testing has problems 
that make it impossible to achieve a release cycle on the order of 12-18 
months with many architectures.

One possible solution for this problem you observed is reducing the 
number of architectures.

Another possible solution for this problem is to switch to a release 
process without testing.

cu
Adrian

[1] This would be more effective if freeze announcements weren't sent only
    6 days before the freeze, but it's your choice as release manager not
    to take full advantage of early announcements.
[2] but build dependencies are still not tracked

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: procmail and Large File Support

2005-03-19 Thread Ron Johnson
On Sat, 2005-03-19 at 09:54 -0600, Gunnar Wolf wrote:
> Ola Lundqvist dijo [Wed, Mar 16, 2005 at 09:18:33PM +0100]:
> > Hello
> > 
> > On Fri, Feb 25, 2005 at 07:45:47PM -0600, Ron Johnson wrote:
> > > On Sat, 2005-02-26 at 00:53 +0100, Santiago Vila wrote:
> > > > Hello.
> > > > 
> > > > I have several reports saying procmail does not support mbox folders
> > > > larger than 2GB. Questions:
> > > 
> > > OT here, but WTF are people smoking, to have 2GB mbox files?
> > 
> > Some people tend to have really large inboxes. I have had a number of
> > customers that have several GB inbox. They tend to get quite a lot
> > of attachments (reports etc) and do not have the time to delete mail.
> > It will grow quite fast.
> 
> Ummm... And wouldn't it make more sense for them to switch to maildir
> instead of mbox? I wouldn't like to search for new mails in there.

But, but, but, we *love* using 6GB mbox files !

-- 
-
Ron Johnson, Jr.
Jefferson, LA USA
PGP Key ID 8834C06B I prefer encrypted mail.

"No drug, not even alcohol, causes the fundamental ills of
society. If we're looking for the sources of our troubles, we
shouldn't test people for drugs, we should test them for
stupidity, ignorance, greed and love of power."
P. J. O'Rourke



signature.asc
Description: This is a digitally signed message part


Re: Buildd redundancy (was Re: Bits (Nybbles?) from the Vancouver...)

2005-03-19 Thread Steve Langasek
On Sat, Mar 19, 2005 at 04:37:05PM +0100, Matthias Urlichs wrote:
> Hi, Steve Langasek wrote:

> >> This allows the buildd administrator to take vacations, etc.

> > This is at odds with what I've heard from some buildd maintainers that
> > having multiple buildd maintainers makes it hard to avoid stepping on one
> > another's feet,

> I assume that that's a problem if the buildd admins are prone to not
> looking where they're going.

> TTBOMK, m68k has no such problem.

TTBOMK, even m68k has one buildd admin per buildd -- the most they generally
have in terms of buildd admin redundancy is that if the admin for a machine
that has built a certain package is unavailable, another admin can waste
cycles by re-building the package elsewhere.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Security work in Debian (Was: Relaxing testing requirements)

2005-03-19 Thread Javier Fernández-Sanguino Peña
On Sat, Mar 19, 2005 at 07:03:07PM +0100, Petter Reinholdtsen wrote:
> There are two security teams in effect now.  The debian/stable team,
> working to make sure the stable release of debian get security fixes
> as soon as possible.  They get security warnings before the issues
> become public knowledge.  Membership in this team is not open to
> everyone.

Actually, there are three. You are missing the Security Audit team, which both 
finds new vulnerabilities and tracks vulnerabilities that have 
been fixed by others but not yet in Debian. Notice that 
those "fixed" issues don't always have a CVE identifier...

Regards

Javier


signature.asc
Description: Digital signature


Re: Buildd redundancy (was Re: Bits (Nybbles?) from the Vancouver...)

2005-03-19 Thread Matthias Urlichs
Hi, Steve Langasek wrote:

>> TTBOMK, m68k has no such problem.
> 
> TTBOMK, even m68k has one buildd admin per buildd -- the most they
> generally have in terms of buildd admin redundancy is that if the admin
> for a machine that has built a certain package is unavailable, another
> admin can waste cycles by re-building the package elsewhere.

Umm, no. If I vanish, one of the other two people who can log onto my
buildds can mail the logs to themselves and sign them.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]






Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Matthias Urlichs
Hi, Marco d'Itri wrote:

> On Mar 19, Daniel Kobras <[EMAIL PROTECTED]> wrote:
> 
>> What's wrong with splitting into ftp-full-monty.d.o, carrying all archs,
>> including the popular ones, and ftp.d.o, carrying only the most popular
>> subset? This way, there's no need to mirror from both of them, and
>> duplication is kept to a minimum. Slightly increased traffic from the
>> fullblown server is the only drawback I see compared to the ports
>> proposal.
> That on some servers I'd like to mirror both archives, and I'd rather not
> waste a few GB on duplicated files.

This may be a stupid question, but if you already mirror full-monty, what
would you gain by also mirroring ftp.d.o on the same server?

But: if you insist: since filenames of the one are a subset of the other,
this sequence would save you from storing or downloading ftp.d.o twice:
- rsync ftp.d.o
- cp -rlu ftp/pool/* fullmonty/pool
- rsync fullmonty
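The `cp -rlu` step works because GNU cp's `-l` flag creates hard links rather than copying data, so any package present in both trees occupies disk space only once. A minimal sketch of the trick (the file name here is made up for illustration):

```shell
# Hard-link trick: files already mirrored in the subset tree are linked,
# not copied, into the full tree, so shared .debs use disk space only once.
mkdir -p ftp/pool fullmonty/pool
echo "package-data" > ftp/pool/foo_1.0_i386.deb
cp -rlu ftp/pool/. fullmonty/pool/   # -l: hard links; -u: skip up-to-date files
# Both paths now refer to the same inode, i.e. the same on-disk data:
stat -c %i ftp/pool/foo_1.0_i386.deb fullmonty/pool/foo_1.0_i386.deb
```

A subsequent rsync of fullmonty then only transfers the files that are missing from the subset mirror.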

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]






Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Darren Salt
I demand that Anthony Towns may or may not have written...

> Michael K. Edwards wrote:
[snip]
>> I think Sarge on ARM has the potential to greatly reduce the learning
>> curve for some kinds of embedded development, especially if Iyonix
>> succeeds in its niche (long live the Acorn!).

> So, I looked at the website, but all I can see are expensive PCs that
> happen to have an arm chip.

FWIW, they're not the only ARM-based desktop boxes which are currently
available, although I'm not sure about the situation wrt Linux.

> Put them behind a firewall on a trusted LAN, use them to develop software
> for arm chips, and then just follow unstable or run non-security-supported
> snapshots. Apart from writing software for embedded arm things, I can't see
> the value

"Linux desktop box" comes to mind...

> -- and if an arch is just going to be used for development, does it really
> need all the support we give stable in order to make it useful for servers
> and such?

Probably not, but ISTM that you'll first have to ascertain that it *is* only
being used for development before you can say that that support definitely
isn't needed.

> If so, why? If not, what level of support does it need, that goes beyond
> "unstable + snapshotting facility", and why? Debian developers [...]

You're focusing too much on development here. There are users too, you
know... :-)

[snip]
> I guess this is really the wrong place to ask for "we use these machines"
> answers instead of "we develop for these machines", but hey.

I don't think that there's any need to *guess*... ;-)

-- 
| Darren Salt   | linux (or ds) at | nr. Ashington,
| woody, sarge, | youmustbejoking  | Northumberland
| RISC OS   | demon co uk  | Toon Army
|   http://www.youmustbejoking.demon.co.uk/progs.linux.html>

You will spend the rest of your life in the future.





Re: Do not make gratuitous source uploads just to provoke the buildds!

2005-03-19 Thread Steve Langasek
On Tue, Mar 15, 2005 at 09:56:10AM +0100, Goswin von Brederlow wrote:

> >> I would like to see some stats showing on how many days in the last
> >> year an arch reached 0 needs-build. I highly doubt that any arch
> > managed to do it every day throughout the last year.

> > You know why goals are important? 0 needs-build is definitely a goal we
> > should work toward.

> I disagree. 0 needs-build once a day is a bad line to draw. Saying
> packages must be built a day before they become testing candidates
> would be a better line. But that would require a non starving queue to
> mean anything but 0 needs-build.

> I don't see any great harm with packages getting built 5 days late if
> they have to wait 10 days for testing. As long as they do get built on
> time.

Ah, but packages that are being uploaded with urgency=low are usually not a
concern for the release team at all; it's the *high*-urgency uploads that
normally demand the release team's attention, and these are precisely the
ones that slower build architectures are going to have a harder time
building within the purgatory interval (2 days, not 10, for high-urgency
packages).  To say nothing of the fact that urgency is not currently
considered for package build ordering, which means that it's very possible
for a high-urgency upload to go for 2 days without being tried at all.
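For reference, the urgency-to-delay mapping discussed above can be sketched as follows; the 10-day and 2-day figures come from this thread, while the 5-day "medium" value is an added assumption about the migration rules of the time:

```python
# Days a package must age in unstable before becoming a testing candidate,
# keyed by upload urgency. "low" and "high" match the figures in this
# thread; "medium" is an assumption.
DELAYS = {"low": 10, "medium": 5, "high": 2}

def days_until_candidate(urgency: str) -> int:
    """Return the 'purgatory interval' for an upload of the given urgency."""
    return DELAYS[urgency]

print(days_until_candidate("high"))  # the 2-day window a slow buildd must beat
```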

> But what do I know, I'm not an RM. So let's think about the criterion:

> Strictly requiring 0 needs-builds every day means the buildd must have
> enough power to cope even with huge upload peaks, and if one of the
> buildds fails at a peak time no arch will cope with that. Obviously
> some leeway will have to be given for an arch to temporarily not meet the
> criterion, say 0 needs-build on 75% of all days and no more than 3
> consecutive failures without special circumstances or something of
> that sort. Right?

> Or do you really want to remove i386 from the release if it fails 0
> needs-build 10 times before the etch release?

Andreas never said anything about this being the criterion for RC
architectures.  He said it should be a *goal*.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Henning Makholm
Scripsit Matthias Urlichs <[EMAIL PROTECTED]>
> Hi, Marco d'Itri wrote:

>> That on some servers I'd like to mirror both archives, and I'd rather not
>> waste a few GB on duplicated files.

> This may be a stupid question, but if you already mirror full-monty, what
> would you gain by also mirroring ftp.d.o on the same server?

As far as I understand, this was in the context of an IRREGULAR
architecture making a "stable" release which contained package
versions that are not present in the official archive. Then the
special repository with the arch-specific stable would neither be a
subset of ftp.d.o nor vice versa; yet they could still have much
source in common.

-- 
Henning Makholm   "Jeg skrællet har kartofler; min ene tommeltot
  røg vistnok med i gryden. Jeg har det ellers got."



Re: my thoughts on the Vancouver Prospectus

2005-03-19 Thread Peter 'p2' De Schrijver
> > * Why is the permitted number of buildds for an architecture restricted to
> >   2 or 3?
> 
> - Architectures which need more than 2 buildds to keep up with package
>   uploads on an ongoing basis are very slow indeed; while slower,
>   low-powered chips are indeed useful in certain applications, they are
>   a) unlikely to be able to usefully run much of the software we currently
>   expect our ports to build, and b) definitely too slow in terms of

You're spouting nonsense here. The vast majority of Debian
packages are useful on slower architectures.

>   single-package build times to avoid inevitably delaying high-priority
>   package fixes for RC bugs.
> 

> - If an architecture requires more than 3 buildds to be on-line to keep up
>   with packages, we are accordingly spreading thin our trust network for
>   binary packages.  I'm sure I'll get flamed for even mentioning it, but
>   one concrete example of this is that the m68k port, today, is partially
>   dependent on build daemons maintained by individuals who have chosen not
>   to go through Debian's New Maintainer process.  Whether or not these
>   particular individuals should be trusted, the truth is that when you have
>   to have 10 buildds running to keep up with unstable, it's very difficult
>   to get a big-picture view of the security of your binary uploads.
>   Security is only as strong as the weakest link.
> 

We now rely on about 1000 developers who can upload binary packages
for any arch, and those do not get rebuilt by the buildds. Thanks for
playing.

> - While neither of the above concerns is overriding on its own (the
>   ftpmasters have obviously allowed these ports to persist on
>   ftp-master.debian.org, and they will be released with sarge), there is a
>   general feeling that twelve architectures is too many to try to keep in
>   sync for a release without resulting in severe schedule slippage.
>   Pre-sarge, I don't think it's possible to quantify "slippage that's
>   preventible by having more active porter teams" vs. "slippage that's
>   due to unavoidable overhead"; but if we do need to reduce our count of
>   release archs, and I believe we do, then all other things being equal, we
>   should take issues like the above into consideration.
> 

Would you please stop generalizing your opinions? There is an idea in
some people's minds that 12 architectures is too many. If you look at the
number of reactions on this list, you will notice that a lot of people
do not agree with you on this point. So stop inventing bogus arguments
to justify your position.

> > * Three bodies (Security, System Administration, Release) are given
> >   independent veto power over the inclusion of an architecture.
> >   A) Does the entire team have to exercise this veto for it to be
> >  effective, or can one member of any team exercise this power
> >  effectively?
> 
> It's expected that each team would exercise that veto as a *team*, by
> reaching a consensus internally.
> 

This is obviously unacceptable. Why should a small number of people be
allowed to veto the inclusion of other people's work?

> >   B) Is the availability of an able and willing Debian Developer to join
> >  one of these teams for the express purpose of caring for a given
> >  architecture expected to mitigate concerns that would otherwise lead
> >  to a veto?
> 
> Without knowing beforehand what the reason for the veto would be (and if we
> knew, we would list them explicitly as requirements), this isn't possible to
> answer.
> 

So drop this bullshit veto thing. There is no reason to have this.

Cheers,

Peter (p2).


signature.asc
Description: Digital signature


Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Anthony Towns
Darren Salt wrote:
> I demand that Anthony Towns may or may not have written...
>> Put them behind a firewall on a trusted LAN, use them to develop software
>> for arm chips, and then just follow unstable or run non-security-supported
>> snapshots. Apart from writing software for embedded arm things, I can't see
>> the value
> "Linux desktop box" comes to mind...
But why would you spend over 1000 pounds on an arm Linux desktop box 
instead of a few hundred pounds on a random i386 desktop box?

A reasonable answer is because you're developing for arm's for embedded 
applications; but if so, what's the big deal with using unstable or 
snapshots, and running your public servers on other boxes?

>> -- and if an arch is just going to be used for development, does it really
>> need all the support we give stable in order to make it useful for servers
>> and such?
> Probably not, but ISTM that you'll first have to ascertain that it *is* only
> being used for development before you can say that that support definitely
> isn't needed.
Uh, you've got that round the wrong way: you don't do something because 
you can't say support definitely isn't needed, you do something because 
you *can* say support definitely *is* needed.

>> If so, why? If not, what level of support does it need, that goes beyond
>> "unstable + snapshotting facility", and why? Debian developers [...]
> You're focusing too much on development here. There are users too, you
> know... :-)
Haven't seen any evidence of it -- developers and vendors, yes, users, 
or uses, no...

Cheers,
aj


Re: my thoughts on the Vancouver Prospectus

2005-03-19 Thread Andreas Rottmann
Bill Allombert <[EMAIL PROTECTED]> writes:

> On Sat, Mar 19, 2005 at 09:13:07AM +0100, Karsten Merker wrote:
>> On Fri, Mar 18, 2005 at 06:44:46PM -0800, Steve Langasek wrote:
>> > [cc:ed back to -devel, since these are technical questions being
>> > raised and answered]
>> 
>> > > * Why is the permitted number of buildds for an architecture
>> > > restricted to 2 or 3?
>> > 
>> > - Architectures which need more than 2 buildds to keep up with
>> > package uploads on an ongoing basis are very slow indeed; while
>> > slower, low-powered chips are indeed useful in certain
>> > applications, they are a) unlikely to be able to usefully run
>> > much of the software we currently expect our ports to build, and
>> > b) definitely too slow in terms of single-package build times to
>> > avoid inevitably delaying high-priority package fixes for RC
>> > bugs.
>> 
>> a) is true for some big packages like GNOME and KDE, but that
>> does not impede the architecture's usefulness for other software
>> we have in the archive.
>
> Also it is an example of ridiculously large source packages, which
> create other problems by themself like the amount of bandwidth wasted
> when one has to apply a one-line fix, in particular for security updates.
>
> Why not considering splitting those source packages? IIRC, this is
> planned for the X11 source packages. This seems a better option overall.
>
GNOME already comprises many source packages. I guess KDE is a
bigger problem, as it seems to have fewer, bigger source packages
and is C++, which is considerably more expensive to compile than C.

Rotty
-- 
Andreas Rottmann | [EMAIL PROTECTED] | [EMAIL PROTECTED] | [EMAIL PROTECTED]
http://yi.org/rotty  | GnuPG Key: http://yi.org/rotty/gpg.asc
Fingerprint  | DFB4 4EB4 78A4 5EEE 6219  F228 F92F CFC5 01FD 5B62

Life is a sexually transmitted disease.





Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Thomas Bushnell BSG
Karsten Merker <[EMAIL PROTECTED]> writes:

> On Fri, Mar 18, 2005 at 06:58:50PM -0800, Thomas Bushnell BSG wrote:
> > Peter 'p2' De Schrijver <[EMAIL PROTECTED]> writes:
> > 
> > > A much faster solution would be to use distcc or scratchbox for
> > > crosscompiling.
> > 
> > Debian packages cannot be reliably built with a cross-compiler,
> > because they very frequently need to execute the compiled binaries as
> > well as just compile them.
> 
> I suppose the idea is to use distcc and a crosscompiler - that way
> the .c files are compiled to .o on a fast architecture with a
> crosscompiler, but the configure scripts, linker and so on run natively.

Right, but upstream makefiles don't work this way, and often
interleave compilation and execution of native code.

Consider a program that has its own source-code-building widget, which
it needs to compile and run to generate more code to compile.  





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-19 Thread Thomas Bushnell BSG
[EMAIL PROTECTED] (Marco d'Itri) writes:

> That on some servers I'd like to mirror both archives, and I'd rather
> not waste a few GB on duplicated files.

So don't duplicate them and use fancier mirroring software.





Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Paul Hampson
On Sat, Mar 19, 2005 at 08:21:18PM -0800, Thomas Bushnell BSG wrote:
> Karsten Merker <[EMAIL PROTECTED]> writes:
> 
> > On Fri, Mar 18, 2005 at 06:58:50PM -0800, Thomas Bushnell BSG wrote:
> > > Peter 'p2' De Schrijver <[EMAIL PROTECTED]> writes:

> > > > A much faster solution would be to use distcc or scratchbox for
> > > > crosscompiling.

> > > Debian packages cannot be reliably built with a cross-compiler,
> > > because they very frequently need to execute the compiled binaries as
> > > well as just compile them.

> > I suppose the idea is to use distcc and a crosscompiler - that way
> > the .c files are compiled to .o on a fast architecture with a
> > crosscompiler, but the configure scripts, linker and so on run natively.

> Right, but upstream makefiles don't work this way, and often
> interleave compilation and execution of native code.

> Consider a program that has its own source-code-building widget, which
> it needs to compile and run to generate more code to compile.  

That'll work. _All_ distcc sends to the cross-compiler is preprocessed C
code to be compiled into object code. So the source-code-building widget
is compiled remotely, run locally, and its output is then sent off to be
compiled remotely in turn.
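A sketch of such a setup, where the native machine runs configure, the preprocessor, and the linker while distcc farms the compile step out to a fast box holding a cross-compiler. The host name, target triplet, and compiler name below are illustrative assumptions, not a tested recipe:

```shell
# Illustrative distcc cross-compiling configuration -- names are assumptions.
export DISTCC_HOSTS="fast-i386-box"     # remote host with an m68k cross-gcc
export CC="distcc m68k-linux-gnu-gcc"   # distcc ships preprocessed C there
./configure                             # configure tests still run natively
make -j4                                # .c -> .o remotely; linking stays local
```

Note the build machine still needs the same cross-gcc installed locally, since distcc falls back to local compilation when no remote host is reachable.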

-- 
---
Paul "TBBle" Hampson, MCSE
8th year CompSci/Asian Studies student, ANU
The Boss, Bubblesworth Pty Ltd (ABN: 51 095 284 361)
[EMAIL PROTECTED]

"No survivors? Then where do the stories come from I wonder?"
-- Capt. Jack Sparrow, "Pirates of the Caribbean"

This email is licensed to the recipient for non-commercial
use, duplication and distribution.
---


signature.asc
Description: Digital signature


Re: my thoughts on the Vancouver Prospectus

2005-03-19 Thread Anthony Towns
Matthew Garrett wrote:
> This, uh, sounds very much like "We need to drop architectures, and so
> we have come up with these criteria that will result in us dropping
> architectures". Which is a reasonable standpoint to take, but which also
> seems to imply that if 12 architectures manage to fulfil all the
> criteria, we'll need to come up with some new criteria to ensure that
> the number drops below 12 again. 
If they can all satisfy the criteria, they're likely to be doing well
enough that there's not much *point* to dropping them -- the reason 11
architectures are hard to manage is because they're not all being
supported at an adequate level. The criteria listed try to give a good
idea of what "an adequate level" is likely to look like.
But basically if the attitude is "this is just a hobby, it's for the
computer in my basement, I don't really care about putting in that much
time" instead of at least "this is *so* cool, this is the best
architecture on the planet, everyone should use it, because it's going
to dominate the UNIVERSE, and I'm not going to sleep 'til it does!!1!"
then the architecture just isn't going to make a stable release. But
from what I've seen of the responses, though I understand Steve's
received some others in private mail, there isn't much in the way of the
second attitude going around.
> If this is the case, I think that needs to be made clearer to avoid
> situations where people work to meet the criteria but are vetoed by the
> release team because there are already too many architectures.
The main issue is the port needs to be on top of problems quickly and
effectively; in many cases we won't know what those problems are 'til
they happen (and thus can't say "your port mustn't have such-n-such a
problem"), the criteria listed are meant to be reasonably objective ways
a port team can demonstrate that they're able to handle problems that arise.
If there are 11 teams that can promptly and effectively handle any issue
that comes up for their respective ports, then I suspect there wouldn't
be a major issue releasing them all -- that was pretty much how it
seemed for woody (particularly for the new architectures since potato),
but, and I'm not speaking authoritatively or for anyone else here, it
really doesn't seem that that's the way things are now.
Cheers,
aj


Re: Emulated buildds (for SCC architectures)?

2005-03-19 Thread Thomas Bushnell BSG
[EMAIL PROTECTED] (Paul Hampson) writes:

> That'll work. _All_ distcc sends to the crosscompiler is preprocessed c
> code to be compiled into object code. So the source-code building widget
> is compiled remotely, run locally, and the results are sent to compile
> remotely.

Oh, I see now.  I was misunderstanding what distcc did.  Very clever!

This might not account for much of the time in every case.  I know that
libc and X compilation spend a huge amount of time in header file
inclusion work; it works well for those cases.

But other builds spend lots of time doing things like outline tracing
and other non-compilation activities.  Still, something like this
could well make a huge dent.






Re: Accepted valknut 0.3.7-1 (i386 source)

2005-03-19 Thread Pasi Savilaakso

> Bug #289643 was not a request for packaging the new upstream version: it
> was a bug report complaining about the program failing to start.  "New
> upstream version" has nothing to do with why this bug was closed.

Does valknut start now? Maybe the new upstream version fixed that? I know 
changes in packaging didn't.
>
> Bug #269952 was not a request for packaging the new upstream version; it
> was a report about broken icons.

Are the icons fixed now? Maybe the new upstream version fixed that? I know 
changes in packaging didn't.

> Bug #265284 was not a request for packaging the new upstream version; it
> was a request to change some strings in the interface, which were changed
> upstream.  But "New upstream version" is not why this bug was closed.

Does the new version have the corrected strings? Maybe the new upstream 
version fixed that? I know changes in packaging didn't.

> Bug #270096 and bug #286234 are requests for the new upstream version.  So
> it is appropriate to list them as such.
>
> If you're going to use the upload bug-closing convenience feature, use it
> right -- your changelog should have something relevant to say about the
> bug, which is *not*, in this case, "New upstream version".

If a new upstream version corrects a bug, isn't it right to close the bug? I 
had the impression that the changelog is meant for changes in packaging. If I 
haven't changed the package in any way that has something to do with the bug, 
what can I say? OK, I can start adding upstream changelog items to the Debian 
changelog too when they deal with bugs, if that is wanted. 

And IF you had read the package changelog 
from /usr/share/doc/valknut/changelog.Debian.gz you would have noticed 
that I have always "said something relevant" when it dealt with packaging. 
Random attacks are fun, aren't they?

Regards, 
Pasi Savilaakso


pgpxiWhOaNFxn.pgp
Description: PGP signature


Re: Accepted valknut 0.3.7-1 (i386 source)

2005-03-19 Thread Pasi Savilaakso
> You could say
>   * New Upstream release (Closes:#12345)
> - No more frobnication (Closes:#23456)
> - Fix random typos (Closes: #34567)
> - Fix random data loss (urgent) (Closes: #45678)
>
Thanks, Matthias.

I will remember this next time.  This is the way critique should be given, not 
the way Steve Langasek attacked. This email was really helpful and actually 
educated me, since in my first mail I said "I don't really know what 
else I could say." This answers that directly.

Regards, 
Pasi Savilaakso 


pgpSxLPoOs3aT.pgp
Description: PGP signature