Re: ITP: gzrt -- gzip recovery toolkit

2006-07-16 Thread Ron Johnson

Paul Wise wrote:
> On Thu, 2006-07-13 at 19:35 +0800, Paul Wise wrote:
> 
>> Please install cpio 2.5 or higher to facilitate recovery from
>> damaged gzipped tarballs.
> 
> I will drop the version from the description and add cpio to the 
> suggests.

Which is "stronger"?

> I added the suggestion to the description because I guess that
> .tar.gz will be the most common type of file being recovered and
> because suggests/recommends do not tell humans exactly how/why
> cpio is useful to install alongside gzrt.

I've always thought that it would be useful if the maintainer would
give some blurb as to why package snagglefrob recommends pussywillow.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.





Re: Challenge: Binary free uploading

2006-07-16 Thread martin f krafft
also sprach Anthony Towns  [2006.07.16.0847 +0200]:
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like "src publish edgy" to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.

http://blog.madduck.net/debian/2005.08.11-rcs-uploads ...

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft <[EMAIL PROTECTED]>
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
windows v.i.s.t.a.: viruses, infections, spyware, trojans and adware




Re: Challenge: Binary free uploading

2006-07-16 Thread Erast Benson
On Sun, 2006-07-16 at 16:47 +1000, Anthony Towns wrote:
> Hi all,
> 
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like "src publish edgy" to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.

Just as a side note: a very similar idea has been implemented by the Nexenta
GNU/OpenSolaris folks, but on top of Subversion; it is called the
"HackZone" [1].

Currently the entire Ubuntu/Dapper repository is imported into the Nexenta
Subversion repository. Developers contributing to Nexenta use branches,
taking advantage of copy-on-write (COW), while promoting experimental work
to the "main" branch. This paradigm also helps us to AutoMerge
"upstream" (Ubuntu/Dapper in this case) changes almost automatically.

Once a developer has made a change in their "HackZone", they can simply
commit and trigger the AutoBuilder [2] by doing:

hackzone-commit -b <branch>

[1] http://www.gnusolaris.org/gswiki/HackZone
[2] http://www.gnusolaris.org/cgi-bin/hackzone-web

Hope this info will be useful too.

Erast





Re: ITP: gzrt -- gzip recovery toolkit

2006-07-16 Thread Martin Wuertele
* Paul Wise <[EMAIL PROTECTED]> [2006-07-16 07:17]:

> On Thu, 2006-07-13 at 19:35 +0800, Paul Wise wrote:
> 
> > Please install cpio 2.5 or higher to facilitate recovery from damaged
> > gzipped tarballs.
> 
> I will drop the version from the description and add cpio to the
> suggests. 
> 
> I added the suggestion to the description because I guess that .tar.gz
> will be the most common type of file being recovered and because
> suggests/recommends do not tell humans exactly how/why cpio is useful to
> install alongside gzrt.
 
You can recommend it; it just isn't necessary to point out the "2.5"
version, as stable has 2.5-1.3 and etch will have 2.6 or newer. Just
change the description to something like

"You will need to install cpio to facilitate recovery from damaged
gzipped tarballs."

yours Martin
-- 
<[EMAIL PROTECTED]>  Debian GNU/Linux - The Universal Operating System
 Ah, forgot -- you use that beast emacs, too.
 now you're getting insulting
 ,)
 weasel uses emacs?
 weasel: you, emacs?
 Alfie: *KICK*
 That's how rumours (and smells) get started  *ggg*
 that's how corpses get started...



Re: ITP: gzrt -- gzip recovery toolkit

2006-07-16 Thread Thijs Kinkhorst
On Sun, 2006-07-16 at 13:14 +0800, Paul Wise wrote:
> On Thu, 2006-07-13 at 19:35 +0800, Paul Wise wrote:
> 
> > Please install cpio 2.5 or higher to facilitate recovery from damaged
> > gzipped tarballs.
> 
> I will drop the version from the description and add cpio to the
> suggests. 
> 
> I added the suggestion to the description because I guess that .tar.gz
> will be the most common type of file being recovered

I agree that that is a common type of file to recover, so that would
make it more appropriate to Recommend cpio rather than Suggest.


Thijs




Re: Challenge: Binary free uploading

2006-07-16 Thread Goswin von Brederlow
Anthony Towns  writes:

> Hi all,
>
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like "src publish edgy" to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.
>
> We've recently seen an example of someone using some general features of
> the bug tracking system to mirror LaunchPad's features wrt tracking the
> status on other BTSes [0] -- what I'm wondering is if we can't manage to
> hack up a similar feature to that one for Debian with our current tools.
>
> The idea would be, I guess, to be able to setup pbuilder on a server
> somewhere, have it watch for a build instruction -- and then automatically
> check out the source, run a build with pbuilder, make the build log
> available, and if the build was successful, make the .changes file, the
> source and the binary packages available, so that they can be checked by
> hand, and uploaded to the archive. 
>
> For bonus points, have the server be able to automatically do the upload
> by the maintainer downloading the changes, signing it, and sending the
> signed changes file somewhere.
>
> For more bonus points, have the server be easy to setup (apt-get install
> some package, edit a configuration file), and work for all sorts of
> different revision control systems (CVS, Subversion, git, etc).
>
> Cheers,
> aj
>
> [0] http://lists.debian.org/debian-devel-announce/2006/05/msg1.html

Will you setup the Debian DAK to allow source only uploads and apply
patches to wanna-build and buildd for anyone willing to work on this?

Because if this is not also meant for Debian, then you are slightly off
topic, and Debian people should be told in advance that their work
would be solely for the competition.


Further, what is your opinion on the following claims:

- people won't test build their sources before upload anymore
- all those build failures will overload the buildds
- the untested debs will have far more bugs making sid even more
  unstable

Any other counterarguments I've not repeated from the numerous past discussions
of banning binary uploads?

Regards,
Goswin

PS: This is not an attack on you; I'm just thrown off, since your
challenge seems to contradict all past discussions.





Re: Challenge: Binary free uploading

2006-07-16 Thread Bernhard R. Link
* Anthony Towns  [060716 08:48]:
> The idea would be, I guess, to be able to setup pbuilder on a server
> somewhere, have it watch for a build instruction -- and then automatically
> check out the source, run a build with pbuilder, make the build log
> available, and if the build was successful, make the .changes file, the
> source and the binary packages available, so that they can be checked by
> hand, and uploaded to the archive. 
> 
> For bonus points, have the server be able to automatically do the upload
> by the maintainer downloading the changes, signing it, and sending the
> signed changes file somewhere.

I think this should have some check that reads the download logs and
refuses the upload (and perhaps also deletes all built files and
blacklists the requestor for a month) if the generated .deb files were
not downloaded, or if the signed changes file was sent in within such an
absurdly short time that it is implausible the build was actually
checked. Something like a quarter of an hour, I'd suggest.

On second thought, perhaps better half an hour, and also checking that
the .diff.gz was downloaded...
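
In rough Python pseudocode (the log format, names, and the threshold below
are invented, purely to illustrate the idea), such a plausibility check
might look like this:

    MIN_REVIEW_TIME = 30 * 60   # half an hour, per the second thought above

    def upload_is_plausible(downloads, changes_returned_at):
        """downloads maps each generated file name to the unix time the
        requestor fetched it; changes_returned_at is when the signed
        .changes file came back."""
        debs  = [t for name, t in downloads.items() if name.endswith(".deb")]
        diffs = [t for name, t in downloads.items() if name.endswith(".diff.gz")]
        if not debs or not diffs:
            return False    # built files (or the .diff.gz) were never fetched
        first_fetch = min(debs + diffs)
        # A signed .changes returned only minutes after the files were fetched
        # makes it implausible that the build was actually checked.
        return changes_returned_at - first_fetch >= MIN_REVIEW_TIME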

Respectfully,
  Bernhard R. Link





Re: Getting the buildds to notice new architectures in a package

2006-07-16 Thread Adam Borowski
On Sat, Jul 15, 2006 at 10:55:32PM +0200, Ludovic Brenta wrote:
> I will upload ~20 source packages in the next few weeks, adding
> support for more architectures to each package.  So I'm really looking
> for a general solution and not one that only applies to asis.

Why aren't those packages arch:any?  "asis" neither uses any hardware
devices, nor appears to have assembly code anywhere inside.

-- 
1KB // Microsoft corollary to Hanlon's razor:
//  Never attribute to stupidity what can be
//  adequately explained by malice.





Re: Challenge: Binary free uploading

2006-07-16 Thread Anthony Towns
On Sun, Jul 16, 2006 at 10:12:37AM +0200, Goswin von Brederlow wrote:
> Will you setup the Debian DAK to allow source only uploads and apply
> patches to wanna-build and buildd for anyone willing to work on this?

No. All the above should be doable without needing any changes to any of
the project infrastructure -- all it does is change how the initial upload
is prepared. In other words, it's a purely technical challenge, no policies
or politics needed.

Cheers,
aj





Re: Challenge: Binary free uploading

2006-07-16 Thread Anthony Towns
On Sun, Jul 16, 2006 at 09:10:20AM +0200, martin f krafft wrote:
> also sprach Anthony Towns  [2006.07.16.0847 +0200]:
> > At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> > the new world order for Ubuntu packages -- which will simplify making
> > changes to Ubuntu packages to a matter of simply committing the change
> > to the source repository with bzr, and running a new command something
> > like "src publish edgy" to instruct the autobuilders to grab the source
> > from the bzr repository, create a traditional source package, and start
> > building it for all architectures.
> http://blog.madduck.net/debian/2005.08.11-rcs-uploads ...

Wow, has it really been that long?

Has any code come of it yet?

Cheers,
aj





Re: ITP: gzrt -- gzip recovery toolkit

2006-07-16 Thread Adam Borowski
On Sun, Jul 16, 2006 at 10:11:41AM +0200, Thijs Kinkhorst wrote:
> On Sun, 2006-07-16 at 13:14 +0800, Paul Wise wrote:
> > I will drop the version from the description and add cpio to the
> > suggests. 
> > 
> > I added the suggestion to the description because I guess that .tar.gz
> > will be the most common type of file being recovered
> 
> I agree that that is a common type of file to recover, so that would
> make it more appropriate to Recommend cpio rather than Suggest.

"a common type"?  Come on, that's not just "common", it's "a vast
majority of cases".  And, a hard Depend on a small priority=important
package is not a big burden -- what about just having a dependency
without the comment?

-- 
1KB // Microsoft corollary to Hanlon's razor:
//  Never attribute to stupidity what can be
//  adequately explained by malice.





Re: Challenge: Binary free uploading

2006-07-16 Thread martin f krafft
also sprach Anthony Towns  [2006.07.16.1320 +0200]:
> > http://blog.madduck.net/debian/2005.08.11-rcs-uploads ...
> 
> Wow has it really been that long?
> 
> Has any code come of it yet?

Well, for one I have not really gotten any input from people, but
that's also partially my fault. I was also disabled for most of the
past 10 months, so no, there hasn't been any code produced from my
end.

I have a pretty good idea of how to implement this, but I have not
figured out how to actually do the certificates. And to be honest, I see
a lot more potential in this idea than Ubuntu chose to implement, but
they are limiting themselves to bzr anyway.

At the core of my approach would be .changes files, which already
list the components of an upload, currently using single filenames.
I don't see a reason why those couldn't be URIs. 

An upload request (as I call them) would be a .changes file sent to
the buildd, which would check it for validity and then start
fetching the components to assemble the source package. So the
orig.tar.gz file would be fetched by the buildd e.g. from upstream
(and checked against size/hash in .changes), the diff.gz generated
by taking the diff -Nru of the unpacked orig against a checkout
defined by another URI, e.g.

  svn://svn.debian.org/svn/collab-main/hibernate/tags/[EMAIL PROTECTED]

(or git, or hg, or bzr, or CVS, just a tarball, or a diff itself),
and potentially have the size/hash of the tree verified -- even
though theoretically, r892 of the above repository cannot be changed
once it's committed, there *are* ways, especially when the
repository isn't hosted on a debian.org machine anymore. And
finally, the DSC file would be generated by the build daemon on the
fly.
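
To make that concrete, here is a minimal Python sketch of the assembly step;
since a URI-based .changes format does not actually exist yet, the
(uri, size, sha1) tuples and the helper names below are assumptions, not a
proposal of record:

    import hashlib
    import os
    from urllib.request import urlopen

    def fetch_component(uri, expected_size, expected_sha1, destdir):
        """Fetch one upload component and verify it against the size and
        hash given in the upload request."""
        data = urlopen(uri).read()
        if len(data) != expected_size:
            raise ValueError("size mismatch for %s" % uri)
        if hashlib.sha1(data).hexdigest() != expected_sha1:
            raise ValueError("checksum mismatch for %s" % uri)
        path = os.path.join(destdir, os.path.basename(uri))
        with open(path, "wb") as out:
            out.write(data)
        return path

    def assemble_source_package(components, destdir):
        """components: list of (uri, size, sha1) taken from the upload
        request, e.g. the orig.tar.gz fetched from upstream and a VCS
        export turned into a diff.gz.  A real implementation would then
        generate the .dsc on the fly, as described above."""
        return [fetch_component(u, s, h, destdir) for (u, s, h) in components]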

This is all fairly straightforward and could be implemented in
a few days. I agree with aj, btw, that this has to be done on
a separate machine first, until it's all tested. Then it should be
merged into dak.

The easiest way to get all this done is by having the buildd send
a generated, standard changes file (the format as we know it, not the
one with URIs) and do the upload when it receives the changes file
from the maintainer with a valid DD signature.

However, this would not do it for me, because ideally, the changes
file would be sent to multiple people (my goal remains solving the
bottleneck problem), and there is no way to ensure that these people
actually tested the same package that the buildd assembled -- and
it's just too easy to sign and send back a changes file when you are
currently too busy with other things.

Thus, my idea was to require a certain number of certificates to be
attached to the changes file, which prove that the source has been
tested. E.g. lintian could issue a certificate just as well as
dpkg-buildpackage could issue one when the package successfully
builds (although a piuparts certificate would make that obsolete).
The buildd would check whether there are special requirements for
the specific source package it's assembling, or otherwise fall back
to the default (e.g. libc6 may require 3 developers to sign off an
upload, while the maintainer of ipcalc doesn't think piuparts is
necessary; all other packages require proof of building of the
binary and signoff by a single developer, which is the current
default). Only if the requirements have been met does the buildd go
ahead and process the package.
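
A small sketch of how the per-package requirement lookup could work (the
certificate names and the default policy below are made up; only the
"per-package requirements with a fall-back default" idea matters):

    DEFAULT_REQUIREMENTS = {"build-log": 1, "dd-signoff": 1}

    PER_PACKAGE = {
        "libc6":  {"build-log": 1, "piuparts": 1, "dd-signoff": 3},
        "ipcalc": {"build-log": 1, "dd-signoff": 1},  # maintainer skips piuparts
    }

    def requirements_met(source_package, certificates):
        """certificates: list of (kind, issuer) pairs attached to the
        upload request."""
        required = PER_PACKAGE.get(source_package, DEFAULT_REQUIREMENTS)
        for kind, count in required.items():
            issuers = {issuer for (k, issuer) in certificates if k == kind}
            if len(issuers) < count:   # e.g. libc6 wants 3 distinct sign-offs
                return False
        return True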

While it's easy to conceive such certificates, and easy to add such
functionality to the checker programmes, it seems impossible to make
it such that they cannot be faked. My take is that we are not trying
to guard against malicious uploads, we are just trying to make
quality assurance more flexible for the requirements of distributed
package maintenance; thus, as soon as we have a certificate system
that may not be secure, but which makes manual certificate
generation (cheating the system) more time-consuming or tedious than
running the checkers and fixing the issues, it's all good. If we
later find that people are going the easy way and e.g. just add
lintian overrides instead of fixing issues, just to get the
certificate they want, we can/should/will resort to other means
anyway.

But there remains one problem with this approach, and this relates
to dak: I think it's very doable to invent a system that builds
binaries from multiple sources (not just source packages), but for
such a binary to make it into the archive still requires a signed
.changes file which dak can read (and dak does not know about
svn:// etc.). Thus, we basically get to the same problem that our
buildd maintainers are facing, and it seems we cannot get around
manual signing of the generated changes files by a developer unless
we beef up dak to be satisfied with the proposed changes file
format.

Anyway, as you can see, this issue certainly strikes my interest and
I am going to Limerick next week to officially start work on my
Ph.D., for which this is certainly a relevant topic. Thus, I'd love
to hear from others who'd be interested.

Bug#378445: ITP: gsf-sharp -- CLI bindings for libgsf

2006-07-16 Thread Jose Carlos Garcia Sogo
Package: wnpp
Severity: wishlist
Owner: Jose Carlos Garcia Sogo <[EMAIL PROTECTED]>

* Package name: gsf-sharp
  Version : 0.7.0
  Upstream Author : Martin Willemoes Hansen <[EMAIL PROTECTED]>
* URL : http://svn.myrealbox.com/source/trunk/gsf-sharp/
* License : LGPL
  Programming Lang: C#
  Description : CLI bindings for libgsf

 A CLI library for reading and writing structured files (e.g. MS OLE and
 Zip)

-- System Information:
Debian Release: testing/unstable
  APT prefers unstable
  APT policy: (500, 'unstable'), (101, 'experimental')
Architecture: i386 (i686)
Shell:  /bin/sh linked to /bin/bash
Kernel: Linux 2.6.17-1-686
Locale: [EMAIL PROTECTED], [EMAIL PROTECTED] (charmap=UTF-8)





Re: Challenge: Binary free uploading

2006-07-16 Thread Thijs Kinkhorst
On Sun, 2006-07-16 at 14:24 +0200, martin f krafft wrote:
> While it's easy to conceive such certificates, and easy to add such
> functionality to the checker programmes, it seems impossible to make
> it such that they cannot be faked.

I don't like the certificate idea for two reasons.

First, if you want to make sure that no packages with e.g. lintian
errors enter the archive, you can make a lot simpler system by just
running lintian server-side. There's no cheating possible, there's no
complex certificate infrastructure required.

But more importantly, I don't think that strictly requiring a package
to be free of lintian errors is a good idea anyway. Suppose that
there's a security bug in a package that I want to fix quickly. Lintian
yields an error that was already present in the previous package. I
can't upload just the security fix unless I fix that other error as well.


Thijs




Re: Challenge: Binary free uploading

2006-07-16 Thread martin f krafft
also sprach Thijs Kinkhorst <[EMAIL PROTECTED]> [2006.07.16.1521 +0200]:
> But more importantly, I don't think that strictly requiring that a
> package is lintian errors clean is a good idea anyway. Suppose that
> there's a security bug in a package that I want to fix quickly. Lintian
> yields an error that was already present in the previous package. I
> can't upload just the security fix unless I fix that other error aswell.

First, which certificates are required can be defined for each
source package. Second, I could well imagine an override-style
certificate for emergency uploads.

About running lintian on the server: the problem is simply the time it
takes until the user gets feedback, plus the server load.

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft <[EMAIL PROTECTED]>
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
"i think, therefore i'm single"
  -- lizz winstead




Re: Getting the buildds to notice new architectures in a package

2006-07-16 Thread Wouter Verhelst
On Sat, Jul 15, 2006 at 10:55:32PM +0200, Ludovic Brenta wrote:
> Where should I ask for help?  Neither buildd.debian.org nor
> www.debian.org/devel/buildd, mention where the buildd admins can be
> reached; and lists.debian.org does not have a "buildd@" list.

<[EMAIL PROTECTED]>. I just committed a change to the
wanna-build-states page to that effect.

> I will upload ~20 source packages in the next few weeks, adding
> support for more architectures to each package.  So I'm really looking
> for a general solution and not one that only applies to asis.

There is no such general solution. See


-- 
Fun will now commence
  -- Seven Of Nine, "Ashes to Ashes", stardate 53679.4





Re: Getting the buildds to notice new architectures in a package

2006-07-16 Thread Ludovic Brenta
Wouter Verhelst <[EMAIL PROTECTED]> writes:
> On Sat, Jul 15, 2006 at 10:55:32PM +0200, Ludovic Brenta wrote:
>> Where should I ask for help?  Neither buildd.debian.org nor
>> www.debian.org/devel/buildd, mention where the buildd admins can be
>> reached; and lists.debian.org does not have a "buildd@" list.
>
> <[EMAIL PROTECTED]>. I just committed a change to the
> wanna-build-states page to that effect.

Thanks; I was aware of these, but the problem was not with any one
architecture in particular; it was with Packages-arch-specific.
Luk Claes pointed me to it, and I've submitted a request to the
admins.  (BTW, thanks, Luk.)

It would perhaps be a good idea to mention the existence of
Packages-arch-specific, how it works, and who administers it on the
buildd.debian.org front page, don't you think?

Also, I would propose that a list, [EMAIL PROTECTED], or even better, a
pseudo-package, buildd, be created for such issues. buildd would
complement ftp.debian.org as a central place for buildd-related
requests.

>> I will upload ~20 source packages in the next few weeks, adding
>> support for more architectures to each package.  So I'm really looking
>> for a general solution and not one that only applies to asis.
>
> There is no such general solution. See
> 

Thanks.  I had already read that.  It says:

! A package in not-for-us or packages-arch-specific will not leave
! this state automatically; if your package specifically excluded a
! given architecture in its control file previously, but now includes
! more architectures, it must be manually requeued".

But it does not say how I should go about "manually requeueing".

-- 
Ludovic Brenta.





Re: Getting the buildds to notice new architectures in a package

2006-07-16 Thread Wouter Verhelst
On Sun, Jul 16, 2006 at 06:31:56PM +0200, Ludovic Brenta wrote:
> Wouter Verhelst <[EMAIL PROTECTED]> writes:
> > On Sat, Jul 15, 2006 at 10:55:32PM +0200, Ludovic Brenta wrote:
> >> Where should I ask for help?  Neither buildd.debian.org nor
> >> www.debian.org/devel/buildd, mention where the buildd admins can be
> >> reached; and lists.debian.org does not have a "buildd@" list.
> >
> > <[EMAIL PROTECTED]>. I just committed a change to the
> > wanna-build-states page to that effect.
> 
> Thanks; I was aware of these, but the problem was not with any one
> architecture in particular; it was with Packages-arch-specific.

There is a large overlap between the maintainers of
packages-arch-specific and the buildd maintainers. This is only normal,
since p-a-s exists only for the benefit of the buildds...

> Luk Claes pointed me to it, and I've submitted a request to the
> admins.  (BTW, thanks, Luk.)
> 
> It would perhaps be a good idea to mention the existence of
> Package-arch-specific, how it works, and who admins it on the
> buildd.debian.org front page, don't you think?

No. Buildd.debian.org is an interface which shows build logs to
non-buildd people (us buildd maintainers get relevant logs in our
mailboxes around the time they appear on buildd.d.o anyway). It is not
the place where buildd is documented, nor should it be; there are other
places for that.

(it *might* be a good idea for buildd.d.o to point to the relevant
documentation, but you need to talk to Ryan Murray to get that :-)

> Also, I would propose that a list, [EMAIL PROTECTED], or even better, a
> pseudo-package, buildd, be created for such issues. buildd would
> complement ftp.debian.org as a central place for buildd-related
> requests.

This has been proposed before. 

It would only work if buildd maintainers agree to use it. Personally, I
feel that a generic "buildd" list for "all" architectures is a bit over
the top (there is rarely ever need for that, it would probably only be
abused (intentionally or otherwise) by people who don't need to contact
all buildd maintainers anyway).

> >> I will upload ~20 source packages in the next few weeks, adding
> >> support for more architectures to each package.  So I'm really looking
> >> for a general solution and not one that only applies to asis.
> >
> > There is no such general solution. See
> > 
> 
> Thanks.  I had already read that.  It says:
> 
> ! A package in not-for-us or packages-arch-specific will not leave
> ! this state automatically; if your package specifically excluded a
> ! given architecture in its control file previously, but now includes
> ! more architectures, it must be manually requeued".
> 
> But it does not say how I should go about "reque manually".

Yes, that's the change I have just committed ($arch@). Some
architectures do still use not-for-us rather than p-a-s. It's better to
use the latter (since it isn't arch-specific), but if it hasn't been
used, then it doesn't help to contact a p-a-s maintainer, since he may
not be able to get the package into the needs-build state.

That's why I suggested contacting the relevant buildd maintainer.

-- 
Fun will now commence
  -- Seven Of Nine, "Ashes to Ashes", stardate 53679.4





Re: Challenge: Binary free uploading

2006-07-16 Thread Wouter Verhelst
On Sun, Jul 16, 2006 at 04:47:12PM +1000, Anthony Towns wrote:
> Hi all,
> 
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like "src publish edgy" to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.
> 
> We've recently seen an example of someone using some general features of
> the bug tracking system to mirror LaunchPad's features wrt tracking the
> status on other BTSes [0] -- what I'm wondering is if we can't manage to
> hack up a similar feature to that one for Debian with our current tools.
> 
> The idea would be, I guess, to be able to setup pbuilder on a server
> somewhere,

Why pbuilder? It's a great tool to check build-deps, and it's a great
tool to casually build packages from time to time; but if you're really
going to get rid of binaries in uploads, I think the more efficient way
to do so would be to hack sbuild and buildd.

> have it watch for a build instruction -- and then automatically
> check out the source, run a build with pbuilder, make the build log
> available, and if the build was successful, make the .changes file, the
> source and the binary packages available, so that they can be checked by
> hand, and uploaded to the archive. 
> 
> For bonus points, have the server be able to automatically do the upload
> by the maintainer downloading the changes, signing it, and sending the
> signed changes file somewhere.
[...]

buildd already has all that.
* wanna-build has lists of packages that need to be built, and buildd
  grabs packages out of those lists. The wanna-build database is
  currently fed by some scripts that are part of dak, but there's
  nothing preventing anyone from writing different scripts and/or
  modifying wanna-build slightly.
* build logs are on buildd.d.o.
* .changes files are part of the build log, and are clearly marked so
  they can be mechanically extracted by a sed one-liner (possibly a perl
  one-liner too, not sure about that bit).
* uploads are done by sending signed .changes files to the buildd host
  (the exact mail address to be used depends on the exact buildd host in
  use, obviously).

You would only need to create some scripts to populate the wanna-build
database, plus modify sbuild so that it knows how to fetch a source
package from a version control system rather than from a Debian mirror.
The rest would probably work as is.
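
As an aside, a rough Python equivalent of that "extract the .changes from
the build log" step might look like the following; the marker strings are
placeholders, since the real build logs use their own markers:

    import sys

    BEGIN_MARK = "BEGIN .changes"   # placeholder; substitute the log's real marker
    END_MARK   = "END .changes"     # placeholder; substitute the log's real marker

    def extract_changes(log_lines):
        """Copy the lines that sit between the two marker lines of a build log."""
        out, copying = [], False
        for line in log_lines:
            if BEGIN_MARK in line:
                copying = True
            elif END_MARK in line:
                break
            elif copying:
                out.append(line)
        return "".join(out)

    if __name__ == "__main__":
        sys.stdout.write(extract_changes(open(sys.argv[1]).readlines()))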

All that being said, I'm not convinced doing source-only uploads is
actually a good idea. It's been proposed in the past, but I've never
seen arguments that convinced me it would be a good idea. The difference
with this idea is that you could set it up so that the original binary
upload would be done out of your source repository, which would then
do a sourceful upload to ftp-master which in turn would trigger builds
on other architectures; that way, you wouldn't bother other
architectures with untested builds.

But we'll still have issues.

For starters, we'd need a *lot* of hardware to be able to do all these
builds. Many of them will fail, because there *will* be people who will
neglect to test their builds, and they will hog the machine so that
other people (who do test properly) have to wait a long time for their
build to happen.

Ubuntu has a lot more money behind them than Debian does, so they can
mitigate this problem by simply buying more hardware. How do you suggest
Debian would tackle this problem?

-- 
Fun will now commence
  -- Seven Of Nine, "Ashes to Ashes", stardate 53679.4





Depends vs. Recommends (Was: Bug#378112: ITP: gzrt -- gzip recovery toolkit)

2006-07-16 Thread Jens Peter Secher
Adam Borowski <[EMAIL PROTECTED]> writes:

> On Sun, Jul 16, 2006 at 10:11:41AM +0200, Thijs Kinkhorst wrote:
>> 
>> I agree that that is a common type of file to recover, so that would
>> make it more appropriate to Recommend cpio rather than Suggest.
>
> "a common type"?  Come on, that's not just "common", it's "a vast
> majority of cases".  And, a hard Depend on a small priority=important
> package is not a big burden -- what about just having a dependency
> without the comment?

No.

And the reason can be found in Policy section 7.2:

Depends

This declares an absolute dependency. A package will not be
configured unless all of the packages listed in its Depends
field have been correctly configured.

The Depends field should be used if the depended-on package is
required for the depending package to provide a significant
amount of functionality.

The Depends field should also be used if the postinst, prerm or
postrm scripts require the package to be present in order to
run. Note, however, that the postrm cannot rely on any
non-essential packages to be present during the purge phase.

Recommends

This declares a strong, but not absolute, dependency.

The Recommends field should list packages that would be found
together with this one in all but unusual installations.


The dependency system is used to make sure things don't break on the
_system_ level.  To ease upgrades, transitions, etc., dependencies
(Depends) should be kept to the absolute minimum.

Cheers,
-- 
Jens Peter Secher
_DD6A 05B0 174E BFB2 D4D9 B52E 0EE5 978A FE63 E8A1 jpsecher gmail com_
A. Because it breaks the logical sequence of discussion
Q. Why is top posting bad?





Re: cdrtools

2006-07-16 Thread Josselin Mouette
On Wednesday 12 July 2006 at 01:02 +0100, Matthew Garrett wrote:
> Now, this can quite easily be worked around by Joerg agreeing that all 
> of the software in the cdrecord tarball can be treated under the terms 
> of the CDDL (assuming that he has the right to do so, of course - any 
> significant patches that have been contributed by people under the terms 
> of the GPL would have to be rewritten or permission granted by the 
> authors). Then it just ends up being a "Is CDDLed material acceptable 
> for Debian?" argument, which is much more straightforward but not really 
> suited for the debian-devel mailing list.

As long as he keeps the "you cannot change this part of the code" blurb,
the most problematic issue remains. The GFDL GR made it very clear that
we won't accept invariant sections, and this is even more true for code.
This is a fundamental disagreement between Joerg Schilling and the
project, and unless he removes that blurb there is no way recent
cdrecord versions can be packaged in main.
-- 
 .''`.   Josselin Mouette/\./\
: :' :   [EMAIL PROTECTED]
`. `'[EMAIL PROTECTED]
  `-  Debian GNU/Linux -- The power of freedom




Re: A question on setting setuid bit

2006-07-16 Thread Josselin Mouette
On Friday 7 July 2006 at 23:54 +0200, Javier Fernández-Sanguino
Peña wrote:
> I can do the security risk analysis for you: granting remote root through a 
> web
> server application is a recipe for disaster, those tactics where (or should
> have been) abandoned ages ago. 

Unfortunately webmin is still in use in many setups...
-- 
 .''`.   Josselin Mouette/\./\
: :' :   [EMAIL PROTECTED]
`. `'[EMAIL PROTECTED]
  `-  Debian GNU/Linux -- The power of freedom




Re: Challenge: Binary free uploading

2006-07-16 Thread Stephen Gran
This one time, at band camp, Wouter Verhelst said:
> All that being said, I'm not convinced doing source-only uploads is
> actually a good idea. It's been proposed in the past, but I've never
> seen arguments that convinced me it would be a good idea. The difference
> with this idea is that you could set it up so that the original binary
> upload would be done out of your source repository, which would then
> do a sourceful upload to ftp-master which in turn would trigger builds
> on other architectures; that way, you wouldn't bother other
> architectures with untested builds.
> 
> But we'll still have issues.
> 
> For starters, we'd need a *lot* of hardware to be able to do all these
> builds. Many of them will fail, because there *will* be people who will
> neglect to test their builds, and they will hog the machine so that
> other people (who do test properly) have to wait a long time for their
> build to happen.

Why not just require binary uploads, and then chuck the binary away?
Then we are where we are today (someone managed to get the thing to
build at least once), but all debs are built from source on the buildds.
-- 
 -
|   ,''`.Stephen Gran |
|  : :' :[EMAIL PROTECTED] |
|  `. `'Debian user, admin, and developer |
|`- http://www.debian.org |
 -




Re: Challenge: Binary free uploading

2006-07-16 Thread Goswin von Brederlow
martin f krafft <[EMAIL PROTECTED]> writes:

> An upload request (as I call them) would be a .changes file sent to
> the buildd, which would check it for validity and then start
> fetching the components to assemble the source package. So the

At home I had my buildd set up so that I could just dump a URL _for_ a
changes file into a web frontend; it would fetch it, verify the
signature, fetch the files, verify them, and then trigger a build.
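
In rough Python terms (the URLs, paths and the final build hook are
placeholders, and the .changes parsing is deliberately naive; real code
would use a proper parser), that amounts to something like:

    import hashlib
    import subprocess
    from urllib.parse import urljoin
    from urllib.request import urlopen

    def trigger_build(changes_path):
        print("would now build from", changes_path)   # placeholder hook

    def handle_changes_url(url, destdir="/tmp"):
        """Fetch a .changes file, verify its signature and the checksums of
        the files it lists, then hand it over to the builder."""
        changes = urlopen(url).read()
        path = destdir + "/incoming.changes"
        with open(path, "wb") as out:
            out.write(changes)
        subprocess.check_call(["gpg", "--verify", path])  # maintainer signature
        in_files = False
        for line in changes.decode("utf-8", "replace").splitlines():
            if line.startswith("Files:"):
                in_files = True
            elif in_files and line.startswith(" "):
                md5sum, size, _section, _prio, name = line.split()
                data = urlopen(urljoin(url, name)).read()
                if hashlib.md5(data).hexdigest() != md5sum or len(data) != int(size):
                    raise ValueError("verification failed for %s" % name)
                with open(destdir + "/" + name, "wb") as out:
                    out.write(data)
            elif in_files:
                break
        trigger_build(path)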

Should buildds really have to know how to use several VCS systems to
generate a source package? Unless this is meant as a "write code at
home, test-compile on the buildd" setup, I don't see much advantage in
this, and a lot of temptation to skip testing the source before building.

A simple upload queue to dupload changes files to, and maybe a web
interface to enter URLs for changes files, should be enough to do a
final "does this build cleanly" test before an upload. The source and
changes file should always be available from the local tests the
developer did.

Regards,
Goswin





Re: Getting the buildds to notice new architectures in a package

2006-07-16 Thread Goswin von Brederlow
Wouter Verhelst <[EMAIL PROTECTED]> writes:

> On Sun, Jul 16, 2006 at 06:31:56PM +0200, Ludovic Brenta wrote:
>> Also, I would propose that a list, [EMAIL PROTECTED], or even better, a
>> pseudo-package, buildd, be created for such issues. buildd would
>> complement ftp.debian.org as a central place for buildd-related
>> requests.
>
> This has been proposed before. 
>
> It would only work if buildd maintainers agree to use it. Personally, I
> feel that a generic "buildd" list for "all" architectures is a bit over
> the top (there is rarely ever need for that, it would probably only be
> abused (intentionally or otherwise) by people who don't need to contact
> all buildd maintainers anyway).

The buildd pseudo-package could just be a central hub where two or three
knowledgeable people sift through the bug reports and then distribute
them to the affected/responsible person.

Regards,
Goswin





Re: Challenge: Binary free uploading

2006-07-16 Thread Anthony Towns
On Sun, Jul 16, 2006 at 08:14:48PM +0200, Wouter Verhelst wrote:
> For starters, we'd need a *lot* of hardware to be able to do all these
> builds. Many of them will fail, because there *will* be people who will
> neglect to test their builds, and they will hog the machine so that
> other people (who do test properly) have to wait a long time for their
> build to happen.

As it stands, I don't think this would be a shared service; but rather
something people set up on their own -- so you edit on your laptop, commit
to your server, and have the build happen remotely so you don't hear the
disk grind, or have your load average increase while you're busy trying
to play armagetron... It could be shared for team maintained things like
the X packages, but at least initially, I wouldn't think that would be
worth worrying about.

That's also why I lean towards pbuilder instead of sbuild -- sbuild is
great for building lots of packages continually; but pbuilder's better for
setting up quickly and easily without having to put much thought into it.

My guess would be that it ought to be possible to hack up a pretty simple
shell script that does this usefully, then build on it from there.
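
For what it's worth, a very rough Python sketch of such a script might look
like this; the repository URL, trigger file and paths below are placeholders
rather than a description of any real setup:

    #!/usr/bin/env python3
    # Very rough sketch of the watcher described above: wait for a build
    # request, check out the source, build it in a clean chroot with
    # pbuilder, and keep the log around for inspection.
    import glob, os, shutil, subprocess, time

    REPO    = "svn://svn.example.org/pkg/trunk"  # placeholder repository
    TRIGGER = "/srv/builder/please-build"        # touch this file to request a build
    WORKDIR = "/srv/builder/work"
    LOGFILE = "/srv/builder/last-build.log"

    def build_once():
        shutil.rmtree(WORKDIR, ignore_errors=True)
        subprocess.check_call(["svn", "export", REPO, WORKDIR])
        # Create a traditional source package from the checkout...
        subprocess.check_call(["dpkg-buildpackage", "-S", "-us", "-uc"], cwd=WORKDIR)
        dsc = glob.glob(os.path.join(os.path.dirname(WORKDIR), "*.dsc"))[0]
        # ...and build it with pbuilder, keeping the log for later review.
        with open(LOGFILE, "w") as log:
            subprocess.check_call(["sudo", "pbuilder", "--build", dsc],
                                  stdout=log, stderr=subprocess.STDOUT)

    while True:
        if os.path.exists(TRIGGER):
            os.unlink(TRIGGER)
            build_once()
        time.sleep(60)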

Cheers,
aj





Which kernels are vulnerable?

2006-07-16 Thread Izak Burger

Hi all,

I had an argument over the weekend about which kernels are vulnerable to
the exploit that was used to take gluck down.  I maintained that only
kernels >= 2.6.13 and <= 2.6.17.4 are vulnerable, but in the end I
proved myself wrong when I took the exploit code, changed the line
that says:

   prctl(PR_SET_DUMPABLE, 2)

to

   prctl(PR_SET_DUMPABLE, 1)

and ran it on a sarge box running 2.6.8 (not sure exactly which
version), and STILL got a root prompt back.  This sarge machine runs
the kernel it was installed with, that is, the one on the 3.1r0a CD
image (I need to upgrade it, obviously).

I then tried the same modified exploit on a vulnerable 2.6.15, and it
failed (i.e., on 2.6.15 it only succeeds if you call it with the
PR_SET_DUMPABLE argument = 2).

My questions: is this a different bug?  When was it fixed and what are
the relevant advisory numbers?

regards,
Izak

