Re: DEP18 follow-up: What would be the best path to have all top-150 packages use Salsa CI?

2024-08-23 Thread Theodore Ts'o
On Tue, Aug 20, 2024 at 06:35:52PM -0700, Otto Kekäläinen wrote:
> 
> In short:
> I would very much like to see all top-150 packages run Salsa CI at
> least once before being uploaded to unstable. What people think is a
> reasonable way to proceed to reach this goal?

Since I'm the e2fsprogs (one of the top-150 packages) maintainer, I
thought I would take a look at this.  I'm not super savvy
about Salsa --- e2fsprogs does have a Salsa git repo, but it's not
the primary; my primary git repositories are on github.com and
kernel.org.  So I did a Google search for "Debian Salsa CI" --- and
found very little that was useful for understanding more about Salsa CI.

For background, I am using GitHub's CI to make sure that there
are no build regressions or new compiler warnings on Linux, Windows, and
MacOS.  I also do test builds using dgit, wired to git-buildpackage
and building using schroot; the test builds run a Lintian check and
run e2fsprogs's "make check" regression test.  I'm not brave enough to
run Debian unstable on my development system, so I will also do a
backport to Debian testing built using git-buildpackage, and I do a
dogfood test run on my developer workstation before I upload.  Also,
as part of my upstream development, I regularly do manual test
builds on Debian stable, and create Debian packages for e2fsprogs
which are integrated into the gce-xfstests[1] test appliance, and make
sure there are no test regressions found when running xfstests against
the latest kernel (which sometimes picks up e2fsprogs regressions,
although 99.99% of the time regressions are picked up by
e2fsprogs's built-in regression test suite).

[1] https://thunk.org/gce-xfstests

So here are the questions whose answers it would be **really** nice to
have easily accessible to a prospective Debian maintainer:

1) From a technical perspective, what does Salsa CI buy me?  Is it just
doing a build from source using "configure ; make ; make check"?  Is
it doing a dpkg-buildpackage?  Is it going to do the equivalent of
autopkgtest?  Maybe it's in the Debconf 2019 presentation; the video
is #5 in the Google results, but I was too lazy to roll the video.
If slides were easily accessible, I probably would have looked at the
slides, but I wasn't able to easily find them.

2) If I'm already using Github's CI, and have autopkgtest, what are
the benefits for using Salsa CI?  (Especially given the amount of
testing that I'm doing already, why should I spend more time enabling
Salsa CI?)

3)  What's the simple recipe for enabling Salsa CI?

4)  Where do the results for Salsa CI end up getting reported?

Sorry if these were all stupid questions, but I couldn't find the
answers easily, so I figured I'd ask on this e-mail thread.  :-)

 - Ted



Re: DEP18 follow-up: What would be the best path to have all top-150 packages use Salsa CI?

2024-08-23 Thread Theodore Ts'o
On Fri, Aug 23, 2024 at 03:08:11PM +0200, Marco d'Itri wrote:
> > Salsa CI?)
> The effort needed to do so is so small that the question really should 
> be "why should I NOT spend a few seconds enabling Salsa CI?".
> 
> > 3)  What's the simple recipe for enable Salsa CI?
> salsa update_projects $NAMESPACE/$PROJECT \
>   --jobs yes --ci-config-path recipes/debian.yml@salsa-ci-team/pipeline

OK, more stupid questions.  What is "$NAMESPACE"?

And I thought I saw something about a debian/salsa-ci.yml file?

And is this web page authoritative?  Or is it just a false search positive?

https://salsa.debian.org/salsa-ci-team/pipeline#basic-use

It doesn't mention the "salsa" command at all, but maybe that isn't
the right web page.  This goes back to my observation that it would be
helpful if there was better documentation to make life easier for
package maintainers.
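
For what it's worth, my best guess from skimming that page is that the
per-package alternative is to commit something like the following
debian/salsa-ci.yml (untested; the include URL is my reading of the
pipeline project's README, so treat it as an assumption):

# guessed from the salsa-ci-team/pipeline documentation --- untested
cat > debian/salsa-ci.yml <<'EOF'
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
EOF
git add debian/salsa-ci.yml

Presumably the project's CI configuration path then needs to point at
debian/salsa-ci.yml, but that's exactly the sort of thing better
documentation should spell out.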

- Ted



Re: Bug#283578: ITP: hot-babe -- erotic graphical system activitymonitor

2004-12-13 Thread Theodore Ts'o
On Sun, Dec 12, 2004 at 12:28:08AM +1100, Hamish Moffatt wrote:
> 
> Not really. The rest of the explanation for non-US is that those
> packages weren't illegal to USE in the USA, but were illegal to
> EXPORT. We don't have a section for packages that you aren't
> allowed to have, or aren't allowed to use.

France made it illegal to use or possess cryptography (at least at one
time, during the height of the crypto iron wall era --- Al Gore, Tear
Down This Wall!).  Yet we still shipped crypto code despite the fact
that possession of crypto without a license could land you in jail
in France.

Saying that we won't ship code just because it might be illegal in
some random country is a very slippery slope.

- Ted




Re: eleventh-hour transition for mysql-using packages related to apache

2005-03-02 Thread Theodore Ts'o
On Fri, Jan 28, 2005 at 05:03:26AM -0800, Steve Langasek wrote:
> As a result, in spite of the timing wrt the release, I'm proposing a
> transition to libmysqlclient12 for a number of packages for sarge.  The
> packages listed below are those packages currently in sarge which either are
> broken with MySQL 4.x, or have the possibility of conflicting with one of the
> packages that do (mostly by being loaded by a webserver such as apache or
> apache2, or being mysql bindings for a language that also has ODBC bindings).

Out of curiosity, where are we with this at this point?  My system
(currently running unstable, but from your description it
sounds like it may be happening on sarge as well) has an
apache2/mysql/php4 combination which blows up the moment you try to
open a connection to a mysql database.  That seems rather
unfortunate for those silly people like myself who are trying to
set up a LAMP stack.

What is the best thing to do at this point?  Tell folks to use
MySQL 3.x instead?

- Ted





[EMAIL PROTECTED]: Re: Bug#343662: fsck errors halting boot after upgrade]

2005-12-17 Thread Theodore Ts'o
Fixing this the right way will require changing when the Debian boot
scripts run hwclock (it needs to be the very first thing), and will require
making changes to util-linux, the installer (so that /etc/localtime is not a
symlink, and so that the information about what the local timezone is
gets stored somewhere other than the symlink), and libc (so that
/etc/localtime can be refreshed as part of a package postinstall).

This is messy, but it's what Red Hat does, and fixes the bug reported
below.  It really is bad that the system clock is wrong for a large
part of the initial boot process (until possibly after
/etc/rcS.d/S50hwclock.sh is run, if /usr is a separately mounted
filesystem).  I can't see another way of fixing this, though; before I
start lobbying the maintainers of the above-mentioned packages, does
anyone have any suggestions about a better way to deal with this
issue?

Thanks, regards,

- Ted
--- Begin Message ---
On Fri, Dec 16, 2005 at 04:16:42PM -0800, Andrew Sackville-West wrote:
> Package: e2fsprogs
> Version: 1.39
> 
> This is specifically version 1.39 WIP (10-Dec-2005)
> 
> /dev/hda3: Superblock last mount time is in the future
> /dev/hda3: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY
> 

Are you using your system with the hardware clock set to some non-GMT local
time zone?  (i.e., /etc/default/rcS has UTC="no")

I didn't test for this case, and so I didn't realize there was a problem ---
the timezone offset isn't corrected at the time when fsck is run (at least
not on Debian systems), and e2fsck depends on the time being correct.
In the past, we more or less got by with the time being wrong (for
systems which use a non-GMT hardware clock); it meant that the last
checked time was set incorrectly, and inode deletion times would also be
set incorrectly, but the failures were more or less harmless.

Unfortunately, I added this test in order to address problems caused
by the last mount time not being correct (see Debian bug #327580) only
to realize this was a much larger issue.

This isn't an issue on Red Hat systems, because /etc/localtime is not
a symlink (into possibly a not-yet mounted /usr filesystem), and they
make sure the system clock is correct *before* running fsck on the
root filesystem.  I personally keep my system clock on UTC, and so
this problem doesn't show up.

I think you can make the problem go away by making /etc/localtime
contain a copy of what it is currently symlinked to in
/usr/share/zoneinfo/, and renaming
/etc/rcS.d/S22hwclockfirst.sh to /etc/rcS.d/S09hwclockfirst.sh.  This
is obviously not the "proper" fix, since among other things, if the
localtime file needs to get updated (for example if the US Congress
changes the definition of daylight savings time), we need a way to
make sure /etc/localtime gets updated when the package gets updated.
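
As a concrete sketch of that workaround (assuming /etc/localtime is
currently a symlink into /usr/share/zoneinfo, and running as root):

# replace the /etc/localtime symlink with a copy of its target
cp --remove-destination "$(readlink -f /etc/localtime)" /etc/localtime
# run hwclock much earlier in the boot sequence
mv /etc/rcS.d/S22hwclockfirst.sh /etc/rcS.d/S09hwclockfirst.sh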

But I believe that if you were to apply the above as a workaround, it
should address your problem.  Fixing this in the more global sense
will require making changes to the overall Debian boot setup, and I'm
going to have to take this up on debian-devel and consult other Debian
developers.

Regards,

- Ted
--- End Message ---


Re: [EMAIL PROTECTED]: Re: Bug#343662: fsck errors halting boot after upgrade]

2005-12-19 Thread Theodore Ts'o
On Sun, Dec 18, 2005 at 10:37:06PM -0500, Anthony DeRobertis wrote:
> Theodore Ts'o wrote:
> > (for example if the US Congress
> > changes the definition of daylight savings time), 
> 
> That should be "when", not "if", unfortunately. AFAIK, they've already
> done it.
> 
> On my system, /bin, /etc, /lib, and /sbin together are 156M;
> /usr/share/zoneinfo is 5.5M. So, while a 3.5% increase in the size of /
> would fix it, it seems rather wasteful for the need of ~1K.
> 
> Maybe just copy (in, e.g., postinst) the one file needed to
> /lib/zoneinfo, and create the symlink to that. It really shouldn't be in
> /etc; binary files do not belong there.

I was only proposing to copy the one file.  I don't think it's quite
so important to put it in /lib and then put a symlink from
/etc/localtime to /lib/localtime.  There _are_ other binary files in
/etc.  Just do:

find /etc -type f | xargs file  | grep data

and you'll find files such as 

/etc/apt/trusted.gpg
/etc/ld.so.cache
/etc/prelink.cache

...as well as image files, PPD files, pcmcia data files, and many
others.

Specifically, what I would propose is /etc/localtime.conf contain
something like "US/Eastern", and let /etc/zoneinfo be a copy of the
file /usr/share/zoneinfo/`cat /etc/zoneinfo`.

Does anyone have any objections to this proposal?

- Ted





Re: [EMAIL PROTECTED]: Re: Bug#343662: fsck errors halting boot after upgrade]

2005-12-23 Thread Theodore Ts'o
On Tue, Dec 20, 2005 at 01:59:55PM -0600, Steve Greenland wrote:
> On 19-Dec-05, 09:21 (CST), Theodore Ts'o <[EMAIL PROTECTED]> wrote: 
> > Specifically, what I would propose is /etc/localtime.conf contain
> > something like "US/Eastern", and let /etc/zoneinfo be a copy of the
> > file /usr/share/zoneinfo/`cat /etc/zoneinfo`.
> 
> Um, /usr/share/zoneinfo/`cat /etc/localtime.conf`, right?

Yes, of course.
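
With that correction, the mechanism would amount to something like this
(a rough sketch only, using the file names proposed above, not an actual
implementation):

# /etc/localtime.conf holds the zone name, e.g. "US/Eastern"
zone="$(cat /etc/localtime.conf)"
# refresh the local copy (e.g. from a postinst) so that /etc/localtime
# is a real file on the root filesystem rather than a symlink into /usr
cp "/usr/share/zoneinfo/$zone" /etc/localtime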

- Ted





Re: Need for launchpad

2006-01-15 Thread Theodore Ts'o
On Fri, Jan 13, 2006 at 12:04:46PM +0100, Thomas Hood wrote:
> I don't think that patches-submitted-to-the-BTS is a good way to
> measure how much Ubuntu is contributing to Debian.  Ubuntu's patches
> are readily available:
> 
> http://people.ubuntulinux.org/~scott/patches/

I looked at the patches for e2fsprogs, and I have to conclude that,
unfortunately, the patches are worse than useless.  It's not clear
exactly what is being diffed against what, but if I had to guess, it's
a diff of Debian stable or Debian testing versus the latest in Ubuntu
unstable --- or whatever their development branch is.

Why do I say that?  Because the vast majority of the patch is my
own latest changes made to the Debian unstable package.  I.e., just to
show you a diff from the changelog file:

diff -pruN e2fsprogs_1.38-1.1/debian/changelog e2fsprogs_1.38-2ubuntu1/debian/changelog
--- e2fsprogs_1.38-1.1/debian/changelog 2005-12-06 13:39:00.0 +0000
+++ e2fsprogs_1.38-2ubuntu1/debian/changelog    2005-11-09 01:11:17.0 +0000
@@ -1,3 +1,32 @@
+e2fsprogs (1.38-2ubuntu1) breezy; urgency=low
+
+  * Merge with Debian.  (Ubuntu #13757)
+  * Remove tests/f_bad_disconnected_inode/image.gz to be able to build the
+package.  This will (hopefully) be in the next upstream version and is
+just used for testing.
+
+ -- Tollef Fog Heen <[EMAIL PROTECTED]>  Tue, 23 Aug 2005 10:42:10 +0200
+
+e2fsprogs (1.38-2) unstable; urgency=low
+
+  * Previous NMU acknowledged (Closes: #317862, #320389)
+  * Fix debugfs's set_inode_fields command so it doesn't silently fail
+when setting certain inode fields.
+  * Fix e2fsck from segfaulting on disconnected inodes that contain one or
+more extended attributes.  (Closes: #316736, #318463)
+  * Allow mke2fs and tune2fs to take fractional percentages to the -m
+option in mke2fs and tune2fs.  (Closes: #80205)
+  * Fix a compile_et bug which miscount the number of error messages if
+continuations are used in the .et file, and fix compatibility problems
+with MIT Kerberos 1.4
+  * Add extra sanity checks to protect users from unusual cirucmstances
+where /etc/mtab may not be sane, by checking to see if the device is
+reported busy (works on Linux 2.6) kernels.  (Closes: #319002)
+  * Fix use-after-free bug in e2fsck when finishing up the use of the
+e2fsck context structure.
+
+ -- Theodore Y. Ts'o <[EMAIL PROTECTED]>  Sun, 21 Aug 2005 23:35:29 -0400
+



And on _top_ of that, we have all sorts of gratuitous autotools
changes.

This is roughly equivalent to submitting a patch to LKML with all
sorts of gratuitous whitespace cleanups mixed in with real,
substantive changes in a gargantuan monolithic patch, _and_ including
all of the changes between 2.6.14 and 2.6.15 in the patch that you
submit, expecting the kernel developers to review it.  Go ahead, try
it.  I dare you.  :-)

> If they were submitted to the BTS then that would just create more work
> for the Debian maintainer as well as for the Ubuntu maintainer, since
> the former would have to tag the report and ensure it gets closed on
> the next upload, etc.  

I would much prefer that; at worst I can always close out the BTS
entry with a wontfix if I disagree with the patch.  But at least I
would see it.

- Ted





Re: Need for launchpad

2006-01-15 Thread Theodore Ts'o
On Sun, Jan 15, 2006 at 01:54:09PM -0600, Manoj Srivastava wrote:
> Could you then take my name off as being reponsible for
>  software that this diverse group of people have modified, if the
>  modifications are more than cosmetic?  Also, I would like the bug
>  reports to be triaged and forwarded to me, so I know of problems in
>  my work.
> 
> On the internet all you have is your reputation. Keeping my
>  name on software that is different from what I have produced, and not
>  telling me of problems people may have found in my product, harms my
>  reputation.

While I don't disagree with this sentiment, keep in mind that Debian
itself is sometimes guilty of adding changes to packages when the
upstream may or may not approve.  Of course, we'll justify by saying
that "users want it", or that it is in "the best interests of the
users", but isn't that exactly the same excuse used by Ubuntu?

I can give a couple of examples; one is way back when, before I took
over the maintenance of the e2fsprogs package, and was merely the
upstream author.  The then-maintainer of e2fsprogs attempted to add
support for filesystems > 2GB, but botched the job, and the result was
that people with filesystems > 2GB would, in some circumstances, get their
filesystems trashed.  Of course, those people complained directly to
me, and the reputation of e2fsprogs took a hit as a result.  I was
pissed, but I was informed there was nothing I could do; the
maintainer of the package can do whatever they want, upstream wishes
be d*mned, unless you try to go through a rather painful appeal
process via a then-relatively inactive technical committee.

More recently, Fedora attempted to add on-line resizing, but botched
the job, so that if you attempted to use resize2fs (the off-line
resizing tool) on any filesystems created by Fedora, the result was a
corrupted filesystem.  Again, people complained directly to me, not to
Fedora, and I was upset, but there wasn't much I could do other than
clean up after the mess made by Fedora.

Of course, you can claim that the users should have complained directly
to their distribution, just as Ubuntu users should have complained to
Ubuntu, and not to the Debian maintainer --- but users are users, and
they tend not to do that.  More generally, as long as distributions make
any changes to upstream code --- which is inevitable --- there is
always the risk of sullying the reputation of upstream.  We run that risk
when we make changes to the upstream sources of our packages, so it's
probably fair to be a bit understanding when the roles are reversed and
we are the upstream and Ubuntu is the downstream.  After all, the stick
in one's own eye is always harder to see than the speck in another's.

- Ted

P.S.  That doesn't change the fact that I think the Ubuntu patches
are useless, and I'd generally much rather be trying to merge
distro-specific patches from Red Hat's RPMs than from Ubuntu's diff
files.





Re: Need for launchpad

2006-01-15 Thread Theodore Ts'o
On Sun, Jan 15, 2006 at 03:12:33PM -0800, Thomas Bushnell BSG wrote:
> Actually, upstream maintainers have no voice before the technical
> committee, which exists to resolve disputes between Debian developers,
> not between Debian developers and outsiders.

Indeed.  And likewise, we have absolutely no control over what Ubuntu
chooses to distribute, either.

> The question here is *NOT* whether Ubuntu has good patches, but
> whether they contribute back, via the BTS, patches which are relevant
> to the Debian upstream.

Actually, Manoj raised the issue of not wanting his name on packages
being modified by a committee, since bugs may harm his reputation.  I
have in the past had my reputation harmed by people who screwed up
e2fsprogs at various distributions.  And there was absolutely nothing
I could do about it.  As you pointed out, before I became a DD,
I had absolutely no standing whatsoever to protest when someone
screwed up my package and damaged my reputation.

So if that's our formal distribution of power between our upstreams
and our Debian Developers, why are we complaining about how Ubuntu
treats us?

- Ted





Re: Need for launchpad

2006-01-16 Thread Theodore Ts'o
On Mon, Jan 16, 2006 at 12:44:01AM -0800, Thomas Bushnell BSG wrote:
> I think this is not quite true.  In any case, my recollection was that
> the bad cooperation was a two-way street, with you being extremely
> reluctant to acknowledge the concerns and needs of distributions, and
> on the other side, distributions disregarding your requests about how
> the package should be modified or installed.

If the fact that I wasn't ready to accept a patch which *wasn't* *ready*
*yet*, and that people went ahead and installed a patch which I rejected,
is evidence of my being "reluctant to acknowledge the concerns and needs of
distributions", then maybe.  When Debian users started having their
filesystems get corrupted, it proved that I was right, didn't
it?

> > So if that's our formal distribution of power between our upstreams
> > and our Debian Developers, why are we complaining about how Ubuntu
> > treats us?
> 
> I would be happy to agree that Debian did not cooperate well with you
> with respect to the past history of e2fsprogs.
> 
> Ubuntu claims to cooperate well with Debian.  That's the problem.

Free speech is a b*tch, isn't it?  Debian at the time claimed that
everything was being done in the interests of the users.  It wasn't
true, but hey, the only way we can counter free speech is with
more speech.  So if we believe that Ubuntu is not cooperating well
with Debian, then Debian should issue a formal statement listing how
Ubuntu is failing to cooperate well with Debian.  Of course, how the
press release is worded will be critical in determining how people
outside of Debian will perceive us as a result.

- Ted





Re: Need for launchpad

2006-01-16 Thread Theodore Ts'o
On Mon, Jan 16, 2006 at 12:06:29PM +0100, Moritz Muehlenhoff wrote:
> Theodore Ts'o wrote:
> > I can give a couple of examples; one is way back when, before I took
> > over the maintenance of the e2fsprogs package, and was merely the
> > upstream author.  The then maintainer of e2fsprogs attempted to add
> > support for filesystems > 2GB, but botched the job, and the result was
> > people with filesystems > 2GB would in some circumstances, get their
> > filesystems trashed.  Of course, those people complained directly to
> > me, and the reputation of e2fsprogs took a hit as a result.  I was
> > pissed, but I was informed there was nothing I could do; the
> > maintainer of the package can do whatever they want, upstream wishes
> > be d*mned, unless you try to go through a rather painful appeal
> > process via a then-relatively inactive technical committeee.
> 
> If it lured you into becoming a DD we should mess up more upstream code :-)

So obviously, by that logic, Ubuntu is doing the right thing by luring
all Debian developers into becoming Ubuntu developers in order to protect
their reputation?  :-)

- Ted





Re: limitations of reportbug and BTS

2006-02-17 Thread Theodore Ts'o
On Thu, Feb 16, 2006 at 07:09:13PM +0900, Miles Bader wrote:
> Eduard Bloch <[EMAIL PROTECTED]> writes:
> > Or the search machine of the choice for those who do not trust Google.
> 
> I think most of those types are holed up in a bunker cradling a machine
> gun.

Or live in China.  :-)

- Ted





Adding dependencies to e2fsprogs: libdevmapperr, libselinux and libsepoll

2006-03-08 Thread Theodore Ts'o

I have recently received a patch which allows the blkid library to
properly handle device mapper partitions.  The problem is that in order to do
this, I have to link in libdevmapper, and by extension libselinux and
libsepol.  Since the blkid library --- which is used by fsck, e2fsck, and
other e2fsprogs programs --- would then depend on these libraries, this
would essentially drag these libraries into everybody's systems.

Are there any objections to my uploading a new e2fsprogs package which
does this?

Thanks, regards,

- Ted





Re: Adding dependencies to e2fsprogs: libdevmapperr, libselinux and libsepoll

2006-03-09 Thread Theodore Ts'o
On Wed, Mar 08, 2006 at 11:07:24PM -0600, Peter Samuelson wrote:
> 
> [Michael Banck]
> > Please take into consideration that libselinux is not available on
> > Debian's non-Linux ports.
> 
> It's not libselinux you should be worried about, but libdevmapper.
> He's not depending on libselinux directly, but he notes that on Linux
> systems, the dependency chain will pull it in.

Actually, because of the e2fsck-static package, e2fsprogs has to have
a build-depends on libselinux.  There doesn't seem to be a way to say,
"except on non-Linux platforms" for a build-depends as far as I know,
unfortunately.  Any suggested solutions?

- Ted






Re: Adding dependencies to e2fsprogs: libdevmapperr, libselinux and libsepoll

2006-03-09 Thread Theodore Ts'o
On Thu, Mar 09, 2006 at 04:38:27PM +0100, Goswin von Brederlow wrote:
> > Actually, because of the e2fsck-static package, e2fsprogs has to have
> > a build-depends on libselinux.  There doesn't seem to be a way to say,
> > "except on non-Linux platforms" for a build-depends as far as I know,
> > unfortunately.  Any suggested solutions?
> 
> List the linux platforms. It is more likely some new non-linux
> platform shows up (like armeb, kfreebsd-amd64, ...) than a new linux
> one.
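
Presumably that would look something like this in debian/control (an
illustrative sketch only; the architecture list is neither complete nor
checked):

Build-Depends: ...,
 libselinux1-dev [alpha amd64 arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc],
 libsepol1-dev [alpha amd64 arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc]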

That's... unspeakably horrible.

What we really need is a separation between "OS" and "Architecture" as
far as dpkg is concerned.

- Ted





Re: Adding dependencies to e2fsprogs: libdevmapperr, libselinux and libsepoll

2006-03-09 Thread Theodore Ts'o
On Thu, Mar 09, 2006 at 01:54:16PM -0800, Steve Langasek wrote:
> > > List the linux platforms. It is more likely some new non-linux
> > > platform shows up (like armeb, kfreebsd-amd64, ...) than a new linux
> > > one.
> 
> > That's. unspeakably horrible.
> 
> > What we really need is a separation between "OS" and "Architecture" as
> > far as dpkg is concerned.
> 
> Yes, the dpkg maintainers have a patch to do that by extending the semantics
> of the Architecture: field.  Other tools need to support the same extensions
> before it can really be used, though.

So until we have it, maybe the right answer would be to create "no-op"
libselinux1-dev and libsepol1-dev packages which can be used to satisfy
the build-depends on our non-Linux ports?

- Ted





Re: NEW handling: About rejects, and kernels (Was: Re: NEW handling ...)

2005-03-23 Thread Theodore Ts'o
On Mon, Mar 21, 2005 at 04:24:41PM +, Matthew Wilcox wrote:
> The Vancouver meeting summary upset me, not because of the proposals
> to drop architectures, but because it contained a reminder of the
> Social Contract changes.  The project is moving to what I believe to
> be a ridiculously extremist position.  I can't support the new Social
> Contract, and wouldn't sign up for it if I were going through NM right
> now.  So the only honourable thing for me to do is resign at the point
> when it come into effect.
> 
> It saddens me greatly that we've come to this situation.  I've been
> proud to be a Debian Developer for the past 6 years.  I'd like to say,
> as others have when resigning, that I will continue to run Debian on my
> machines, but I can't.  Moving documentation to non-free makes Debian
> a less suitable distribution for me.  I shall have to look around and
> see what other distributions suit my needs.

The way that I deal with this from a personal point of view is to
remind myself that non-free is supported by Debian-the-organization,
even if it is not formally "part of the Debian distribution".
Semantic games, but unfortunately Debian seems to be more focused on
flame wars about semantics than actually shipping code and
documentation that meets the needs of its users.

If the free software fanatics succeed in kicking non-free out of being
supported by Debian assets, such that the FSF documentation were no
longer available, I'd probably end up agreeing with you, and would
probably do what you are considering doing after sarge ships.

If it would help, I'd ask you to reconsider.  If all the reasonable
moderates leave, then all that will be left will be the extremists.

Regards,

- Ted





Re: Urgently need GPL compatible libsnmp5-dev replacement :-(

2005-05-03 Thread Theodore Ts'o
On Tue, May 03, 2005 at 07:06:36PM -0700, Steve Langasek wrote:
> 
> The license of the GNUTLS OpenSSL shim is GPL, causing possible license
> problems in the other direction with GPL-incompatible apps.  It's also not a
> very complete compatibility layer.
> 

So dynamically link against _an_ SSL library, using dlopen(), and this
completely trumps the issue.  The fact that there are multiple
libraries implementing the OpenSSL interface means that as long as
the application calls the *interface*, it can't be derived from
*either* library, and it escapes the license incompatibility issues.
(Remember, license compatibility can only be an issue if the program
can be shown to be a derived work of a particular library.  If it is
calling an interface which is implemented by more than one library,
then clearly it can't be a derived work, and once it is not a derived
work, copyright law by definition can't apply.)

Example: The libss library searches for the readline, editline, and
libedit libraries via a search path, and dlopen()'s the first one it
can find.  It then calls those interfaces to get readline
functionality.  The Solaris SEAM (Solaris Enterprise Authentication
Mechanism), which is a proprietary program derived from the
MIT Kerberos V5 sources, also happens to call the libss library, with
which it is dynamically linked.

Yet now when you run the Kerberos administration CLI program from
SEAM, and install the newer version of the libss library so that kadmin
dynamically links with it, and the libss library then happens to
dlopen the GNU readline library... you get a process containing
proprietary Solaris code, BSD-licensed libss code, and the GPL'ed readline
library, all in the same address space.  But has there been a GPL
violation?  No, since the only time a derivative work can
conclusively be shown to be created is when the user ran the kadmin
program, and the GPL does not restrict use, only distribution.

Could the kadmin program be considered a derived work of the readline
library?  No, because it was written to call libss *years* ago, long
before libss was modified to potentially call the readline library.
The kadmin program called the libss *interface*, and at the time the
author of the kadmin program had no idea that it might subsequently
end up calling a GPL'ed library indirectly via libss.  And
furthermore, the BSD-licensed libss library does not even directly
link against the readline library, but rather uses dlopen() and
dlsym() to call a particular *interface* which could be satisfied
either by a GPL or BSD licensed library.  So how can you say that the
libss library is a derivative work of either library?

I believe you can use a similar solution to solve the openssl library
problem.  If there is a shim layer, and the application uses a search
path to find a library which it then dlopen()'s, this should
completely trump the license compatibility issue, since in this case
it is clear that it is not a derived work of any one particular
library, but rather it is calling an interface which can be satisfied
by multiple libraries, and which library will get used can only be
determined at run-time.

- Ted





Re: Package priorities: optional vs extra

2005-07-06 Thread Theodore Ts'o
On Mon, Jul 04, 2005 at 04:06:22PM -0500, Peter Samuelson wrote:
> 
> [Lionel Elie Mamane]
> > I recently found some packages in at an IMHO totally wrong priority
> > in Debian.
> 
> Yeah.  I've been grumbling about optional vs. extra for years.  Nobody
> wants to consider his own packages 'extra' because every maintainer
> feels his own packages are Really Useful.  This is a side effect of
> common human hubris, and it's probably pointless to fight it.

How about a policy change where for all packages that have been in
unstable for a year or more, the results of popcon will dictate
whether the package is considered "extra" or "optional"?

- Ted





Re: GCC version change / C++ ABI change

2005-07-06 Thread Theodore Ts'o
On Mon, Jul 04, 2005 at 11:39:59AM +0100, Jon Dowland wrote:
> > It is my believe that the 2.4 kernel is still in wide spread use
> > both indide and outside Debian, thats a cause for being concerned
> > about it in my books.
> 
> Indeed, its the kernel shipped with RHEL 3.x .

Sort of.  2.4 kernels have generally been patched by most
distributions to the point where they are hardly recognizable.  Both
Red Hat and SuSE have backported _so_ many 2.5/2.6 features into their
"2.4 kernel" that you generally can't boot a kernel.org 2.4 kernel on
their systems.  Since all of the distributions have forked so far from
the mainstream kernel, and most of the kernel developers are focusing
on 2.6, most 2.4 maintenance takes place within the various
distributions.  It's therefore up to the Debian kernel team whether
they feel like supporting 2.4 or not.  

- Ted





Re: What makes a debconf?

2003-05-28 Thread Theodore Ts'o
On Sat, May 24, 2003 at 01:41:56AM -0500, Branden Robinson wrote:
> 
> True enough, but since USENIX took over Atlanta Linux Showcase, ran it
> for one year, and then shot it in the back of a head like a drug kingpin
> assassinating an unwanted lieutenant, Debian developers in the U.S.,
> particularly the southeast, have been missing a bit of an opportunity for
> a gathering.
> 
> I really miss ALS.

It's a bit more complicated than that.  Jon 'Maddog' Hall strongly
encouraged the folks who ran ALS to team up with Usenix, and to try
moving the show around to different parts of the country, with the 'A'
in ALS changed from "Atlanta" to "Annual".  The first such
collaboration happened in Atlanta, and the second happened in Oakland,
California, in 2001.  

Unfortunately, at some level, ALS's business model was fundamentally
flawed.  It relied on the trade show floor subsidizing everything
else.  This worked fine during the dot.com boom, when money flowed
like water, but by 2001, VA Linux had dumped its hardware business and
switched to a proprietary software model, Linuxcare had gone belly up,
Turbolinux was pretty much gone, etc.  So I remember going to the
show and noting that one (the only?) "Platinum" trade show sponsor
--- Red Hat --- thought the show so unimportant that even though they
had paid $$$ to be a Platinum sponsor, their booth consisted of an
unadorned table with two boxes of Red Hat and Red Hat Advanced Server,
with no one even bothering to staff the booth.

To make matters worse, the 2001 ALS happened two months after 9/11,
which meant a lot of people cancelled travel, and so Usenix wasn't
able to make the hotel room block guarantees.  The bottom line was
that Usenix lost half a million dollars on that show.

After that point, a post-mortem was done on the show, and it was
pretty much agreed that the ALS business model was not going to
work going forward, and that at best, the only thing which made sense
was for it to go back to its roots as a small regional show.  Most
vendors don't have much interest in going to a Linux-specific trade
show these days.  The last one left is Linux World Conference and
Expo in NYC and San Francisco, and it's not too clear that it will
continue to exist 2-3 years from now.  Most of the exhibitors at LWCE
are companies that also go to Comdex and other big trade shows anyway,
and the customers they want to sell to aren't necessarily going to be at
a Linux-specific trade show.

However, the ALS organizers were pretty tired and burned out, and so
they decided not to do another show in 2002.  It certainly would be
great for there to be more regional Linux shows, although my guess is
that they will have to be much smaller affairs than ALS was in
the past.  If people in the southeast are interested in running one, I
suspect the ALS "old-timers" would be ecstatic, and would be happy to
dispense words of wisdom...

- Ted




Re: debootstrapping and sysvinit

2003-07-01 Thread Theodore Ts'o
Miquel,

It is certainly true that sysvinit is an important package, and as
such, it requires frequent care and attention to deal with bugs
(and you have a lot[1] of open bugs against the sysvinit package).
For better or for worse, the release history of sysvinit has not been
one characterized by "release early and often".

When I offered to help earlier (because I had an important e2fsprogs
bug that I couldn't close because it was blocked on a sysvinit bug),
you admitted that you were pretty busy these days.  Perhaps it would
be useful for you to accept help in the form of a co-maintainer for
the package?  Sysvinit is important enough that this might be
considered a good idea...

- Ted

[1] 121 bugs, of which 56 are Important/Normal, and 61 are
minor/wishlist, and 4 are fixed/pending.




Re: Please remove RFCs from the documentation in Debian packages

2003-07-03 Thread Theodore Ts'o
On Thu, Jul 03, 2003 at 10:03:47AM +0200, Javier Fernández-Sanguino Peña wrote:
> (For those who are not aware of this issue, please read #92810)
> 
> Since the doc-rfc packages have been moved to non-free, I have just cloned
> the doc-rfc RC bug (#92810) and assigned it to some other packages which
> provide RFCs (for a full list see the the bug report, but more might be
> affected). I advise maintainers which include RFCs in their packages to
> remove the RFC documentation from them.

Note that ISOC is not granted an exclusive copyright license.
Therefore, one option that is open to a maintainer is to try to
contact the original author of the RFC, and ask for permission to
redistribute under a DFSG-compliant license.

This obviously won't work for the entire RFC series, but if it is
extremely important to include a particular RFC in a package for
documentation purposes, this is one way to accomplish it. 

Also, as already has been pointed out, some of the early RFC's do not
have the objectionable ISOC copyright terms in them.

- Ted




Re: Debconf or not debconf : Conclusion

2003-07-03 Thread Theodore Ts'o
On Thu, Jul 03, 2003 at 04:49:19PM -0400, Joey Hess wrote:
> 
> If I ever add filtering to the notes debconf allows to be displayed,
> notes that refer the user to README.Debian will be at the top of the
> list to never be displayed.
> 
> Of course, I am much more likely to bow to the pressure of notes like
> the one you're apparently adding, and completly disable all notes at
> some point, rather than adding filtering. I don't like arms races.
> 

After seeing multiple attempts to use social pressure to stem
the flood of debconf misuse, it's at times like this that I
sometimes think Eric Troan really got this part of rpm's design right
(some 7 or 8 years ago) when he completely forbade any I/O between the
install scripts and the user at install time.  As he put it
(paraphrased, since I don't remember his exact wording), if even a small
percentage of packagers indulge their desire to put up dialog boxes,
the system will become extremely annoying.  How prophetic he was ---
or rather, how well he understood human nature.

Everybody believes that *their* package has something ***so***
important to say that they have to tell the whole world about it.  And
perhaps I'm being too pessimistic, but trying to fix this by social
pressure is like trying to shame American soccer moms into not
driving gasoline-gulping SUVs.  It's never going to work.

If you want to fix the problem, you have the right idea by thinking
that you should perhaps simply disable all notes.  That's the only
solution that will stop the flood of warning messages and notices.
(And perhaps by removing this crutch, packagers will be more
encouraged not to gratuitously break things as the result of package
upgrades, even if upstream does something stupid.)

On a separate but related topic, I think a much better approach would
be to handle configuration as a step entirely separate from the
install phase.  Let the install be entirely quiet, and let packages
have intelligent defaults.  If the package absolutely must be
configured before it can be used, then let it be non-functional until
someone actually calls dpkg-configure (which would be just like
dpkg-reconfigure except that's the only time the questions would be
asked).

- Ted




Re: Debconf or not debconf : Conclusion

2003-07-05 Thread Theodore Ts'o
On Sat, Jul 05, 2003 at 05:05:01PM +1000, Anthony Towns wrote:
> The point of decoupling installation and configuration is to let the admin
> choose which of these scenarios happen, instead of the distribution or
> the maintainer. The first is appropriate if you're doing installs of many
> systems (work out how you want it to look, then slam it onto all of them
> automatically), the second if you're doing an upgrade from aptitude, and
> the third if you've blatted a standard install from a magazine cover-CD
> and need to do some final configuration.

Yet another reasons for wanting to decouple installation and
configuration is if some hardware company (such as VA^H^H Emperor
Linux) wishes to ship Debian pre-installed on the system.  In that
case, installation happens at the factory, and not when the user
receives it in his/her hot little hands.

- Ted




Re: Bug#200153: ITP: e2tools -- utilities for manipulating files in an ext2/ext3 filesystem

2003-07-06 Thread Theodore Ts'o
On Sat, Jul 05, 2003 at 11:57:35PM +0200, Falk Hueffner wrote:
> Ralf Treinen <[EMAIL PROTECTED]> writes:
> > > E2tools is a simple set of GPL'ed utilities to read, write, and
> > > manipulate files in an ext2/ext3 filesystem.
> > 
> > please excuse my ignorance - what would be the advantage of these
> > tools over the core file utilities which use the VFS layer?
> 
> You don't need root. Useful for example to build rescue floppy images.

Actually, you can do this with debugfs.  That's how Real Men (tm)
build their initial bootstrap images.  (i.e., Linus Torvalds, when he
was first bootstrapping the Alpha port.)

That being said, e2tools does have an easier-to-use interface
than debugfs, and reduces the chance that the user will harm himself; sort
of like the difference between using an Exacto knife and one of those
scissors with rounded ends that get handed out to pre-schoolers.  :-)
That's not a bad thing, and certainly I'm not saying it shouldn't be
packaged.  In fact, I think it's a good thing, just as the mtools
suite is useful even though we have the msdos filesystem in the
kernel.

However, I've taken a quick look at it, and I do have some
warnings:

*) It looks like upstream hasn't released a new version since July or
August 2002.  Before that, releases were happening regularly.  Is the
author Keith Sheffield (cc'ed on this note), still maintaining the
package?  

*) It badly needs to be autoconfiscated.  

*) Currently it hardcodes the path to static libraries in
../e2fsprogs-1.27/lib/...; it should use the shared version of the
libraries.

*) It's missing man pages

*) It needs testing against a variety of newer versions of ext2fs to
make sure the code actually works well with filesystems that have htree
directories and extended attributes.  Upon brief inspection, I can see
that it's not dropping the refcount on the extended attribute block
when deleting a file, nor freeing the extended attribute block when
the refcount goes to zero.

This is no shame; it merely shows the age of the code, and to be
honest, the rm function in debugfs doesn't do this correctly either
--- I'll put that on my "to fix" list.  However, there is the implicit
promise that userland packages such as e2tools will work correctly,
whereas debugfs sets a much lower level of expectations, since it's
assumed to be a wizard-level tool, with sharp pointy edges upon which
naive users can hurt themselves if they are not careful.

What it *does* show is that e2tools badly needs a test suite, which
takes a bunch of filesystems, does various e2tools operations on them,
and then runs e2fsck on each filesystem to make sure the resulting
filesystem is still valid.  I have not run any tests on it, so I can't
be sure the extended attribute block handling is the only issue.  I
*think* it should be just fine with htree directories, since the
libext2fs library handles a lot of the issues automatically.  But
until you run tests --- preferably automated test suites --- you can
never be sure.
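
A minimal round-trip test might look something like this (a sketch only;
the exact e2cp/e2rm invocations are my guess from a quick read of the
e2tools documentation, so treat the syntax as an assumption):

# build a small scratch filesystem image
dd if=/dev/zero of=test.img bs=1k count=8192
mke2fs -q -F test.img
# exercise a couple of e2tools operations on the image
echo "hello, world" > local.txt
e2cp local.txt test.img:/hello.txt
e2rm test.img:/hello.txt
# the test passes only if the resulting filesystem is still clean
e2fsck -fn test.img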


The bottom line is that e2tools shows a lot of promise, but there's
also a lot of work that could be put into it in order to improve its
quality.  Perhaps upstream could be convinced to tackle some of the
work, or at the very least, accept patches that you feed back to him.

- Ted




Re: A success story with apt and rsync

2003-07-06 Thread Theodore Ts'o
On Sun, Jul 06, 2003 at 10:12:03PM +0100, Andrew Suffield wrote:
> On Sun, Jul 06, 2003 at 10:28:07PM +0200, Koblinger Egmont wrote:
> > Yes, when saying "random order" I obviously ment "in the order readdir()
> > returns them". It's random for me.  :-)))
> > 
> > It can easily be different on different filesystems, or even on same
> > type of filesystems with different parameters (e.g. blocksize).
> 
> I can't think of any reason why changing the blocksize would affect
> this. Most filesystems return files in the sequence in which they were
> added to the directory. ext2, ext3, and reiser all do this; xfs is the
> only one likely to be used on a Debian system which doesn't.

Err, no.  If the htree (hash tree) indexing feature is turned on for
ext2 or ext3 filesystems, they will return entries sorted by the hash of
the filename --- effectively a random order.  (The hash also
includes a random, per-filesystem secret in order to avoid
denial-of-service attacks by malicious users who might otherwise try
to create huge numbers of files containing hash collisions.)

I would be very, very surprised if reiserfs returned files in creation
order.  The fundamental problem is that the
readdir()/telldir()/seekdir() API is fundamentally busted.  Yes,
Dennis Ritchie and Ken Thompson do make mistakes, and have made many;
in this particular case, they made a whopper.  

Seekdir()/telldir() assumes a linear directory structure which you can
seek into, such that the results of readdir() are repeatable.  Posix
only allows files which are created or deleted in the interval to be
undefined; all other files must be returned in the same order as the
original readdir() stream, even if days or weeks elapse between the
readdir(), telldir(), and seekdir() calls.

Any filesystem which tries to use a B-tree like system, where leaf
nodes can be split, is going to have extreme problems trying to keep
these guarantees.  For this reason, most filesystem designers choose
to return files in b-tree order, and *not* the order in which files
were added to the directory.

It is really, really bad assumption to assume that files will be
returned in the same order as they were created.

> On ext2, as an example, stat()ting or open()ing a directory of 1
> files in the order returned by readdir() will be vastly quicker than
> in some other sequence (like, say, bytewise lexicographic) due to the
> way in which the filesystem looks up inodes. This has caused
> significant performance issues for bugs.debian.org in the past.

If you are using HTREE, and want to do a readdir() scan followed by
something which opens or stats all of the files, you very badly will
want to sort the returned directory entries by inode number
(de->d_inode).  Otherwise, the order returned by readdir() will be
effectively random, with the resulting loss of performance which you
alluded to, because the filesystem needs to randomly seek and read all
around the inode table.

Why can't this be done in the kernel?  Because if the directory is 200
megabytes, then the kernel would need to allocate and hold on to 200
megabytes until the userspace called closedir().  There is simply no
lightweight way to work around the problems caused by the broken API
which Ken Thompson and Dennis Ritchie designed.

The good news is that this particular optimization of sorting by inode
number should work for all filesystems, and should speed up xfs as
well as ext2/3 with HTREE.
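
From the shell, the difference can be illustrated roughly like this (a
sketch using GNU ls and stat; it assumes no whitespace in the file names):

cd /some/big/directory
# stat every file in readdir order (effectively random with htree)
ls -1fi . | awk '{print $2}' | xargs stat > /dev/null
# stat every file in inode-number order (one sweep through the inode table)
ls -1fi . | sort -n | awk '{print $2}' | xargs stat > /dev/null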

- Ted




Re: A success story with apt and rsync

2003-07-06 Thread Theodore Ts'o
On Sun, Jul 06, 2003 at 11:36:34PM +0100, Andrew Suffield wrote:
> 
> I can only presume this is new or obscure, since everything I tried
> had the traditional behaviour. Can't see how to turn it on, either.
> 

It's new for 2.5.  Backports to 2.4 are available here:

http://thunk.org/tytso/linux/extfs-2.4-update/extfs-update-2.4.21

For those who are interested, the broken out patches can be found here:

http://thunk.org/tytso/linux/extfs-2.4-update/broken-out-2.4.21/to-apply

Once you have a htree-enabled kernel, you enable a filesystem to use
the feature by using the following command:

tune2fs -O dir_index /dev/hdXX

Optionally, you can reorganize all of the directories to use btrees by
using the command "e2fsck -fD /dev/hdXX".  Otherwise, only directories
that are expanded beyond a single block after you set the dir_index
flag will use htrees.  The dir_index feature is a fully compatible extension,
so it's perfectly safe to mount a filesystem with htrees on a
non-htree kernel.  A non-htree kernel will just ignore the b-tree
information, and if it attempts to modify a hash-tree directory, it
will just invalidate the htree interior node information, so that the
directory becomes unindexed until "e2fsck -fD" is run over the
filesystem, which re-optimizes all of the directories by reindexing
them all.

Why would you want to use htrees?  Because they speed up large
directories.  A lot.  Try creating 400,000 zero-length files in a
single directory.  It will take under 30 seconds with htree enabled,
and well over an hour without.
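
If you want to try that experiment yourself, something like this will do
it (a rough sketch; pick a scratch directory on the filesystem you want to
test):

mkdir /scratch/htree-test && cd /scratch/htree-test
# create 400,000 zero-length files in a single directory
time seq 1 400000 | xargs touch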

> > The good news is that this particular optimization of sorting by inode
> > number should work for all filesystems, and should speed up xfs as
> > well as ext2/3 with HTREE.
> 
> What about ext[23] without htree? Mucking with the order returned by
> readdir() has historically caused problems there...

It'll be fine; in fact, in some cases you'll see a slight speed up.
The key is that you'll get the best performance by reading/modifying
the inode data structures in sorted order by inode number.  This way,
you make a single sweep through the inode table, without needing any
extraneous seeks.  Using the natural sort order of readdir() on
non-htree ext2/3 systems mostly approximated this --- although if
files are deleted and created from the directory, this is not
guaranteed.  So sorting by inode number will never hurt, and may help.

- Ted




Re: A success story with apt and rsync

2003-07-06 Thread Theodore Ts'o
On Mon, Jul 07, 2003 at 01:01:34AM +0100, Andrew Suffield wrote:
> > 
> > I believe htree == dir_index, so tune2fs(8) and mke2fs(8) have the answer.
> 
> My /home has that enabled and readdir() returns files in creation order.
> 

Then you don't have an htree-capable kernel, or the directory isn't
indexed.  Directories that fit in a single block are not indexed, nor are
directories larger than a block that were created before directory
indexing was enabled, or that were modified by a non-htree-capable
kernel.

You can use the lsattr command to see if the indexed (I) flag is set
on a particular directory:

% lsattr -d /home/tytso
--I-- /home/tytso

- Ted




Re: Work-needing packages report for Jul 11, 2003

2003-07-11 Thread Theodore Ts'o
On Fri, Jul 11, 2003 at 11:15:31PM -0500, Graham Wilson wrote:
> On Fri, Jul 11, 2003 at 08:49:46AM +0200, Marcelo E. Magallon wrote:
> > On Fri, Jul 11, 2003 at 12:33:22AM -0400, Work Needing Prospective
> > Packages wrote:
> >  >judy (#172772), orphaned 210 days ago
> >  >  Description: C library for creating and accessing dynamic arrays
> >  >  Reverse Depends: libjudy-dev
> > 
> >  I thought that bogus bogofilter depended on this for building...
> 
> bogofilter used to use this, but doesnt any longer. anybody opposed to
> removing it?

Looking at the documentation, this looks like a very interesting
library.  It would be a shame to lose it from Debian, and the package
looks like it builds cleanly and has no bugs.  I'd be willing to adopt
it.

- Ted




Re: Work-needing packages report for Jul 11, 2003

2003-07-12 Thread Theodore Ts'o
On Fri, Jul 11, 2003 at 11:58:55PM -0700, Joshua Kwan wrote:
> On Sat, Jul 12, 2003 at 08:25:57AM +0200, Thomas Viehmann wrote:
> > There's someone on d-mentors wanting to adopt this. As in the BTS:
> > 
> > Debian Bug report logs - #172772
> > ITA: judy -- C library for creating and accessing dynamic
> 
> Oh dear, Ted T'so just uploaded it and assumed maintainership...

I'm not on d-mentors, and no one had noted this fact in the BTS.  Sorry.

I assume what was meant was that a prospective DD was interested in
adopting the package?  Does that person have a sponsor?  If so, could
the sponsor contact me?  We can probably work something out.

- Ted




Re: Bits from the RM

2003-08-20 Thread Theodore Ts'o
On Wed, Aug 20, 2003 at 01:26:17PM +0200, cobaco wrote:
> > and why on stable you do not expect a stable KDE?  
> kde 3.2. will be the stable kde release come 8 december

The reality is that if KDE 3.2 is the stable KDE release, and it entered
testing on December 8th, it would probably delay the release of sarge
by at least 6 weeks just so we could make sure the Debian build of KDE
3.2 was bug-free.  And then in the meantime, no doubt some other
significant package (GNOME, perhaps?) would become "stable", and
people would agitate for us to hold up the release for another 6
weeks, OR developers would take the opportunity to destabilize the
release further because they see a vast, major, gaping exception to
the freeze schedule set forth by the RM.  AND this all assumes that
KDE actually hits their release schedule!

Bad ju-ju.

The real problem is that stable has a reputation of taking years and
years before we manage to do a release, so people are always desperate
to shove every last bit of functionality and new upstream release into
it.  What folks don't realize is that makes the problem worse, not
better, by stretching out the release schedule.

Better to have a hard freeze schedule, and then try to turn out new
stable releases every 6-9 months.  Then folks won't be so desperate to
shove new things in and screw up the release.  The problem, though, is
that the first such attempt to take release schedules seriously and
aggressively requires (a) a really hardass RM, and (b) a certain amount of
faith by the developers that we really can get our act together about
short, regular, predictable releases.

- Ted




Re: making developer location from ldap public?

2005-08-25 Thread Theodore Ts'o
On Thu, Aug 25, 2005 at 05:11:01PM +0200, Robert Lemmen wrote:
> 
> i fully agree that generally an opt-in system is better, but in this
> case it is far more complicated to implement, and it's not really
> anything big that we are talking about here. if you want to hide where
> you are living from the public, you'll have a lot bigger problems than
> this entry that you can edit yourself. 
> 
> so the question should be more like "do you really have a problem if
> this field would be public"...

Some people are not comfortable with having that kind of information
easily available on the Internet.  The default must be opt-in, or not
at all.

- Ted





Policy or best practices for debug packages?

2008-07-07 Thread Theodore Ts'o

There doesn't seem to be anything in policy about debug packages.  Are
there any wiki pages or best-practices documents about the best
ways to create debug packages?

Some of the questions I have are:

*) I assume that the priority of -dbg packages is extra

*) What section should -dbg packages be placed into?  Should it be the
   section that the parent package is in, or something like "devel"?

*) Do we dump everything into /usr/lib/debug, i.e.,
   /usr/lib/debug/sbin/e2fsck?   Or should we put it in
   /usr/lib/debug/<package>, i.e., /usr/lib/debug/e2fsprogs/sbin/e2fsck?
   Most packages I've seen seem to be doing the former.

*) Is it OK to include the -dbg information in a library's -dev package?
 Or should it be separated out?  Otherwise, as more and more packages
 start creating -dbg packages, the number of packages we have in the
 Debian archive could grow significantly.

*) Red Hat includes source files in their debuginfo files, which means
 that their support people can take a core file and get instant feedback
 as to the line in the source where the crash happened.  But that also
 means that their debuginfo packages are so huge they don't get included
 on any DVD's, but have to be downloaded from somebody's home directory
 at redhat.com.  (which appears not to be published, but which is very
 easy to google for.  :-)   What do we want to do?

There are probably more questions like this, but in case all of this has
been decided and I just missed it while google searching for the
relevant policy/best practice document, I'll stop now.  :-)
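
For what it's worth, the pattern I've seen most often in other
packages' debian/rules boils down to a single dh_strip call --- a
sketch of common practice, not policy, and it assumes a binary package
named e2fsprogs-dbg has been declared in debian/control:

    # strip the real binaries and ship the detached debugging symbols
    # under /usr/lib/debug/ in the e2fsprogs-dbg package
    dh_strip --dbg-package=e2fsprogs-dbg

If there is a better-blessed idiom than this, that's exactly the sort
of thing I'd like to see documented.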

- Ted

   





Request to check for /dev/.static/dev in /etc/blkid.tab

2008-08-04 Thread Theodore Ts'o
Hi all,

Apparently udev 0.125-3 is going to be in Lenny (it's not yet in
Lenny, but apparently the release-team will be giving an exemption to
let it in despite the freeze).  One of the changes in udev 0.125-3 is
that /dev/.static/dev is going to be going away.  (Rightly so, it's a
hack).  However, this interacts poorly with a bug in the blkid library
which will fail to get rid of stale /dev/.static/dev entries in
/etc/blkid.tab.  (See Bug#493216)

The fix is fairly simple, but I'm trying to get a sense of how
many Debian users have this problem and will get bitten when they
upgrade to Lenny.  So, if folks could type the following command into a
terminal window: "grep /dev/.static /etc/blkid.tab", and if you see any
output, could you drop me a quick e-mail with the results of the grep
command?

If you do find any output, I'd appreciate knowing when/how your
system was installed, and if you may have ever explicitly typed a
command such as "blkid /dev/.static/dev/sda1".  I can't see a situation
where a /dev/.static/dev entry would get into the blkid.tab file,
except by explicit user action, but the submitter of bug #493216 claims
he's never done this.

The workaround to this problem is fairly simple: "rm 
/etc/blkid.tab" or "blkid -g" as root will do it; but if it turns out
there are large numbers of users suffering from this problem, I'd like
to know, so I can petition the release-team for my own freeze window exception
to get in a very simple patch to fix this bug before Lenny ships.
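
Putting the check and the workaround together (run as root; the
cleanup is only needed if the grep actually prints something):

    # check for stale /dev/.static entries in the blkid cache
    grep /dev/.static /etc/blkid.tab
    # if anything turned up, regenerate the cache; either of these works
    rm /etc/blkid.tab
    blkid -g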

Thanks, regards,

- Ted





Please ignore mail bounces for [EMAIL PROTECTED]

2007-12-27 Thread Theodore Ts'o
Hi all, some of you may have noticed having received mail bounces for
mail that you sent me from [EMAIL PROTECTED] that referenced a failure
to deliver mail to [EMAIL PROTECTED], for example:

  <[EMAIL PROTECTED]>... Deferred: Connection timed out with thunk.org.
  Message could not be delivered for 3 days
  Message will be deleted from queue

Some of you may have also noticed that mail sent to [EMAIL PROTECTED] has
started bouncing in a similar fashion.

My apologies.  Unfortunately thunk.org failed on December 23rd, and
because I and other people who might have been able to gain physical
access to the colo facility (where the server is located) have been
travelling for the holidays, I haven't been able to get the machine back
up as quickly as I would like.  Please rest assured that mail sent to
[EMAIL PROTECTED] is being read, despite the bounce messages.
[EMAIL PROTECTED] was a backup copy of mail sent to [EMAIL PROTECTED] that
I established a while back as a backup way of reading my mit.edu mail
after my mit.edu IMAP server was down for 4 days.[1]

In any case, mail service to [EMAIL PROTECTED] has not been interrupted; I
am reading mail sent there, and it is the primary way that I should be
contacted for any Linux/Open Source related matters.

Mail to [EMAIL PROTECTED] is currently being held in mail queues, and
depending on how SMTP mailers have been configured, may have started
getting returned to sender.  A few debian mailing lists are set to feed
[EMAIL PROTECTED], and some people use it to contact me, although I
generally have not published it as a contact address.

I'm hopeful that I will get thunk.org restored fairly quickly, and
apologize for any inconvenience/confusion this may have prompted.

Regards,

- Ted

[1] An interesting story, which involved the MIT I/T staff waiting 36+
hours for a Solaris ufs fsck to finish on a 1TB filesystem, before
aborting it in frustration and using other methods to restore service,
but for another day.





A request for those attending key signing parties

2011-01-31 Thread Theodore Ts'o
At the most recent Linux.conf.au pgp keysigning, I noticed a number of
Debian developers present.  Like me, they had new keys that they offered
up for signing, presumably so they could start replacing their 1024DSA
keys with stronger keys.

If you are signing keys where you've verified the identity of fellow
Debian developers at a key signing party, please do us all a favor and
don't just sign it with your brand-new key --- but *also* sign the DD's
key with whatever key you currently have in the Debian
keyring.

Otherwise, you could end up with a situation where a whole group of DD's
have each other's keys certified, but only signed with their new keys
--- which isn't useful when they are submitting their keys to the Debian
keyring maintainer for inclusion.

What I did was I signed the keys that I verified with *both* my new key
and the key I currently have in the Debian keyring.  However, to date,
although I've received key signatures from multiple people whom I know
to be Debian developers, my new key is only signed by one key which is
currently in the debian keyring.  (Thanks to Brendan O'Dea!)  At the
moment my new 4096 bit RSA key is waiting until I get more signatures,
or some of the new DDs' keys that have signed my key get accepted into
the Debian keyring.
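
Mechanically, signing with both keys just means running the signing
step twice, once with each secret key; roughly like this (the key IDs
below are placeholders):

    # sign with the key that is already in the Debian keyring...
    gpg --local-user 0xOLDKEYID --sign-key 0xFRIENDKEYID
    # ...and again with the new 4096-bit RSA key
    gpg --local-user 0xNEWKEYID --sign-key 0xFRIENDKEYID
    # then export the freshly-signed key and mail it back as usual
    gpg --armor --export 0xFRIENDKEYID > friend-signed.asc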

- Ted





Re: Package libc6-dev depends on linux-kernel-headers

2003-11-03 Thread Theodore Ts'o
On Mon, Nov 03, 2003 at 01:20:49PM -0500, Daniel Jacobowitz wrote:
> > I don't know whether this package needs to match the kernel version or
> > not, but if not I think the name is poorly chosen.
> 
> It does not need to.  Feel free to propose a patch to document this
> more clearly (I don't really want to rename it again...)

Not normally, but installing 2.5/2.6 based header files can break
attempts to build 2.4 modules --- openafs-modules-source broke as a
direct result of installing linux-kernel-headers 2.5.99-test7-bk-6.

- Ted




Re: comerr-dev (>= 2.0-1.33-2)

2003-11-03 Thread Theodore Ts'o
On Mon, Nov 03, 2003 at 08:23:42PM +0100, Turbo Fredriksson wrote:
> I need to upgrade my semi-woody system. I don't want to do a
> dist-upgrade (only upgrade MIT Kerberos V). The 1.3-2 version
> needs comerr-dev (>= 2.0-1.33-2) and I have 2.0-1.27-2.
> 
> Jumping to 1.34+1.35-WIP-2003.08.21-3 seems to be to big a step...
> 
> Where can I find '= 2.0-1.33-2' (or something around that number)?
> It used to be an 'attic' (morgue I think it is called) on ftp-master.
> This however only have files roughly two months back...

Actually, you probably want to use the latest comerr-dev and comerr
versions, since there are some compatibility fixes to make sure it
works correctly with Debian packages that utilize the Heimdal
Kerberos libraries.

com-err is pretty safe and low-risk, so there should be no problem
using the latest version in unstable.

- Ted




Re: Bug#219582: ITP: linux -- Linux 2.4 kernel

2003-11-08 Thread Theodore Ts'o
On Fri, Nov 07, 2003 at 11:00:23PM +0100, Andreas Metzler wrote:
> >> No, the fix is a fucking huge amount of work, which is why nobody has
> >> done it before, even for the upstream kernel.
> 
> > Appliing patches dinamicaly and conditionaly is a huge amount of work?
> 
> No, choosing, writing and testing the patches on the respective
> machines is.

Consider how long it takes for us to get XFree86 4.3.0 out, when most
of the other distro's are already shipping with 4.3.0, and the CVS
tree generally works just fine for i386.  I'm told one of the reasons
for this is because XFree86 attempts to support all architectures.  

My concern with trying to have a single kernel that supports all
architectures is that it will be so hard to upgrade to a newer version
that it will become as stale and obsolete as XFree86, even in sid, is
today.  (Yes, I know about XFree86 4.3.0 in experimental; but the fact
that we needed to do that is precisely my point.)

XFree86 4.3.0 was released February 27, 2003.  Almost 9 months later,
it still hasn't hit sid.  Do you want to be shipping a kernel which is
9-12 months out of date when the stable release is finally cut, such
that it is some 2-3 years out of date before it is finally replaced
with the next Debian stable release?

If not, think carefully about whether the "one package for all
architectures" really makes sense!  I don't think it does, and I think
folks are massively underestimating the amount of work it takes to
support something like the kernel or an X server on all Debian
architectures.

- Ted




Re: Bug#213450: bug #213450

2003-11-17 Thread Theodore Ts'o
On Mon, Nov 17, 2003 at 03:14:49PM +1100, Brian May wrote:
> Hello,
> 
> There is a bug (actually a number of bugs now) against heimdal
> that causes it to segfault under certain conditions.
> 
> The bug has been reassigned to libcomerr2.
> 
> It also has a simple one word patch.
> 
> However, I have not got any response from the debian maintainer.

Sorry, for some reason this is the first e-mail message I've received
on this bug.  I'm not sure why the earlier messages (from when it was
reassigned, and the other messages filed on this bug after that point)
didn't actually get to me.

Looking at the code, I believe it's probably better to fix the code in
com_right.c (which is also in libcomerr2) than to change the type of
n_msgs, but let me do some more detailed analysis.  I'll get back to
you fairly quickly with an answer.

Again, my apologies for the delay.  I'm not sure why I didn't get the
earlier messages, including the one which was directly addressed to me
on October 17th, but for some reason it completely disappeared.  The
only thing I can think of is that the mit.edu spam
filters ate your message.  Very curious.

- Ted




Re: UserLinux white paper

2003-12-02 Thread Theodore Ts'o
> On Tue, Dec 02, 2003 at 12:04:31PM +, bruce wrote:
> > I did a first pass at the UserLinux white paper, it's at
> > http://userlinux.org/white_paper.html. I think I'll sleep for a while.

This is an interesting white paper, but I think it's missing something
rather important in its discussion of the business model.  And I say
this as I currently sit at a customer site in Atlanta working a
critical situation, while wearing a suit (but not a tie, so the flow
of blood to the brain has not been impeded :-).

The important thing to remember here is that customers don't buy
operating systems, they purchase solutions.  And at the moment, many
of the solutions require the use of proprietary third party
applications: applications like SAP, or Oracle Financials, or Ariba.

The next logical question then is why will an ISV support a particular
distribution or OS provider?  The answer in practice is that they will
only support an OS/Distribution when they are reasonably certain that
when they need help fixing some problem, they can get help from the
distribution.  Very often, in these situations, the ISV doesn't
necessarily pay money to the OS/Distribution provider.  In some cases,
where the ISV is highly desired by the customers, the OS/Distribution
provider actually has to **pay** **money** to the ISV, and establish a
competency center in Waldorf, Germany staffed with some number of
engineers before said ISV will actually deign to port their
application to that particular OS and support that particular OS.
These sorts of situations really do happen!  

Even in situations where the ISV is so highly desired that it would be
a severe competitive disadvantage for a particular OS vendor if that
particular enterprise resource planning application were not available
on that OS, in many cases the ISV's can at the very minimum require
the OS vendor to provide free support.

I recently visited one ISV where at their height of popularity, IBM
had a team of three or four people devoted towards keeping that ISV
happy, and this was necessary in order to assure that the ISV would
continue to support AIX.  This particular ISV drove enough business in
hardware, software, and professional services sales to IBM that it was
worth IBM's while to devote a team of people for that particular ISV
(and this ISV was not even one of the most highly strategic ISV's ---
some ISV's might have an order of magnitude more people!).  

If some vendor such as Sequent had chosen not to devote that kind of
support to that particular ISV, that ISV might have
chosen not to support PTX, and then Sequent would get locked out of
certain customers that might have chosen to use this particular
financial application.  (I use Sequent here only because I didn't want
to use the name of a currently active company; but the example applies
just as equally to SGI/Irix or HP/HPUX or Sun/Solaris.)

So the problem then with the UserLinux distribution concept is how do
you fund required investments which are necessary for that particular
distribution to succeed?  $1 million USD might pay for the necessary
engineering costs, but it will not pay for the ISV engagement
resources necessary to provide free hand-holding support to ISV's that
are used to getting that kind of support, and who are used to
companies coming to them on bended knee in order to convince that ISV
to port their application to Solaris, to AIX, to HPUX.  But if one of
the goals is to get an endorsement from application vendors, UserLinux
will have to provide a comparable level of support to what Sun might
give that particular ISV in order to support Solaris, for example.

However, if you have multiple competing "body shops" that are making a
small-but-manageable amount of profit to provide end-user customer
support, how do you fund the freebie support to the ISV's?  (And even
worse, what about some of the more strategic, more desirable
ISV's that in some cases require free hardware or even seven figure
cash payments before they will entertain supporting UserLinux?)

It's an interesting problem but understanding some of these
constraints might allow folks to understand why the commercial Linux
distributions charge so much for their enterprise Linux products.

- Ted




Re: UserLinux white paper

2003-12-02 Thread Theodore Ts'o
On Tue, Dec 02, 2003 at 04:52:47PM -0800, Bruce Perens wrote:
> I don't deny that many businesses do have to come to their vendor on 
> bended knee to get support for a new platform. It's important, however, 
> to realize that this does indicate a problem in the customer's 
> relationship with the vendor. Either there's only one solution, or the 
> customer has allowed himself to enter a lock-in situation. The latter is 
> much more likely.

Most end-customers don't bother going to their vendor on bended knee
to get support for a new platform.  That assumes that most customers
want to run machines with a particular OS, and that's simply not true.
Customers do not purchase operating systems/distributions; they
purchase solutions.

So instead, businesses decide that they want to run SAP, or Oracle
Financials, or Ariba, or Peoplesoft, and then they decide which
hardware and OS they want to use that will support their desired
application of choice.  This is why traditionally computer vendors
have to go to ISV's on bended knee.  Once a customer has decided to
adopt Peoplesoft, or Ariba, if Debian or UserLinux or SuSE is not
supported, then those hardware/software/distribution platforms that do
not support the chosen business application will simply be out of
luck.

> Group 1 is a large and complicated industry. They are major customers 
> for a number of proprietary application providers. Their business is 
> complicated enough that it is not possible for them to purchase a 
> solution, they must integrate it under the direction of their IS 
> department, using both internal and external resources. They have the 
> economic power to compel their application providers to support the 
> platform of their choice, it is the application provider who must come 
> to them upon bended knee.

Why does Group 1 really care about running under Linux, as opposed to
some other OS?  Is it really about price sensitivity?  If so, it's
surprising because to the extent that they pay $50,000 for Oracle, or
$1,000,000+ for SAP R/3, why should they care about the cost of $1500
for the RedHat or SuSE enterprise version of the distro?  

> Group 2 are ISPs. They do not in general ask for much added value over 
> the Open Source contents of the system, and they are generally 
> self-supporting. They are more interested in quality and cost than ISV 
> support.

To the extent that they are self-supporting, they become economically
irrelevant to a commercial distribution or to a support provider of
UserLinux.  The best that you will get out of these customers are bug
reports, and maybe you can get some of them to become Debian
Developers and work on Debian packages on company time.  So why don't
they just use Debian instead?

I will also note that ISP's are generally not regarded as
"enterprise" customers.  So perhaps you are using a somewhat different
definition of "enterprise" than what is traditionally used.

> So, our problem is how to rebalance the vendor-customer relationship for 
> our purposes. Probably the most useful tool is the industry group 
> organization, where a number of similar businesses get together to steer 
> their participation in userlinux, and the group involves their vendors 
> from a position of strength, together, rather than one of weakness, 
> apart. Customer group 1 is confident that this will work for them.

Businesses who get together can also negotiate better discounts from
today's distribution vendors.  It's already the case that very few
people actually pay list price for commercial distributions

- Ted




USELINUX CFP

2003-12-05 Thread Theodore Ts'o

Perhaps some folks would be interested in talking about Debian at
USELINUX in Boston next year?  If someone in the Boston area would be
willing to help organize a Debian BOF and other activities, that would
be great.  

I'm going to be fairly busy during the Usenix/USELINUX myself, between
being a conference chair and teaching a tutorial, etc., so having
someone who could organizing getting people together to staff a table at
the conference, perhaps, and organize a keysigning would be great.  Give
me a call if you'd be interested in helping to organize some Debian
events; I can coordinate with the Usenix folks to make sure we can have
space.

Also, if some of the folks who are interested in a Custom Debian
Distribution or a Debian "flavour" wanted to give presentations about
using Debian in some interesting fashion, that might make for a very
interesting session.  If you have any questions about how to make a submission
to USELINUX, please feel free to drop me a line.  Thanks!!

- Ted


Call for Speakers/Proposals: USELINUX

USELINUX will be one (or possibly two) day special interest track
hosted as part of the 2004 USENIX Annual Technical Conference in
Boston (June 27 through July 2, 2004).  The focus of USELINUX, as the
name implies, will be on showcasing ways in which creative members of
the Linux community are making use of Linux --- on the desktop, in
embedded applications, in corporate data centers, in retail
environments --- the possibilities are endless!

We are soliciting proposals for presentations covering any of the
following topics:

* Linux advocacy --- quantifying Linux TCO/ROI advantages
* Linux adoption in lesser developed countries
* Development tools and GUI libraries for embedded Linux
* Technologies which enable the use of Linux 
- desktop technologies
- file format conversions
- integration with legacy Unix and Windows
operating systems
* Case studies
- Using Linux in Business
- Commercial uses of free software
- Using Linux in network infrastructure and telephony
- Cluster applications using Linux
- Embedded Linux applications
- Business models involving open source
* Organization issues: Linux Users Groups
* Next generation Linux security models
* Managing large-scale and high-availability Linux installations

Proposals for more informal, "Bird of a Feather" sessions are also
welcome.


Submission guidelines:

In order to judge a proposal, we are requesting a submission of a 2-3
page extended abstract of the proposed presentation.  These abstracts
must be submitted by December 16, 2003.  While formal papers are
requested, they are not required; in lieu of papers, camera-ready
copies of the slides to be used in the presentation may be submitted
by May 4, 2004.  Formatting guidelines for papers may be found at:
http://www.usenix.org/events/usenix04/freenixsubmit.html.

In order to submit a presentation proposal, please go to the
following web page:

http://www.usenix.org/events/usenix04/uselinux.html


IMPORTANT DATES:

* Proposals due: December 16, 2003, 11:59 p.m. EST
* Notification to authors:   February 4, 2004
* Camera-ready papers due:   May 4, 2004


USELINUX Program Committee

Jerry Feldman
Jim Gleason
Bdale Garbee
Jon "Maddog" Hall
Don Marti
Stacey Quandt
Theodore Ts'o
Victor Yodaiken




How frequently is the override file getting updated?

2003-12-12 Thread Theodore Ts'o
Hi,

I uploaded a new set of e2fsprogs debs 5 or 6 days ago, which among other
things contained an RC bugfix.  It's been stalled waiting for someone to
update the override file.  I know that people are busy finishing off the
last bits of recovery from the security compromise, but when should I
expect that the override file will get updated so the updated e2fsprogs
can get accepted into the archive?

Many thanks!!

- Ted




Re: plagiarism of reiserfs by Debian

2003-04-21 Thread Theodore Ts'o
On Mon, Apr 21, 2003 at 11:14:05AM +0100, Matt Ryan wrote:
> It's also worth considering that perhaps there is a language difference
> (does Hans have English as a first language?) that make it seem that the
> email seem harsher than it really is. Many Europeans are naturally very
> honest with what they say and at first this comes across as been rude/blunt
> etc (especially to people who rarely consider the world outside there own
> borders).

Nice try, but Hans is an ugly American, I'm afraid.  (Born and bred in
California?)  Went to grad school in Berkeley, but left because no one
listened to his ideas.

- Ted

P.S.  This is the first I've heard about GPLv3 having the equivalent
of an advertising clause (in fact, it's worse than an advertising
clause, since at least the BSD advertising clause was only in
documentation, not in the program startup messages).  Given the FSF's past
position on the BSD advertising clause, it seems... surprising... to
me that RMS would put such a clause in GPLv3.  I am eagerly waiting
to hear RMS's views on the subject.








Re: Bug#189370: acknowledged by developer (irrelevant)

2003-04-21 Thread Theodore Ts'o
This issue has degenerated to name calling at this point, and in other
threads, Godwin's law has even been invoked, perhaps not to great
effect.

I agree with you Manoj, as I suspect most people who have commented on
this list, but perhaps this is time to refer the issue to the
Technical Committee, and get them to issue a ruling on this question
one way or another?

- Ted




Re: Jumped up developers [Re: stop the "manage with debconf" madness]

2003-04-21 Thread Theodore Ts'o
On Mon, Apr 21, 2003 at 06:33:49PM +0100, Matt Ryan wrote:
> And the bit that the "jumped up developers" don't seem to understand is the
> co-operation and consensus. I constantly see comments on how we should
> restrict the number of maintainers, how we need to make sure everyone's
> packages measures up to some indication of worth and importance and how if
> you have not got stuck in with some technical solution in the dim and
> distant past then your opinion isn't worth jack. My vision of inclusiveness
> means that everyone gets a say whether its liked it or not.

People can say whatever they want.  They can say that 2+2=5.  That
doesn't make it be technically correct.

> There are no ranks in Debian, no one gets paid (AFAIK) and so no view is
> more or less valid than another. 

Absolutely not.  For issues involving questions of fact, some views
are correct, and some views are incorrect.  For other issues, we *must* all
agree to do things a certain way, or the project loses all coherence.
That's what policy is all about.  If your view goes against what
Policy dictates, you can argue that Policy should be changed, but to
say that "your view is just as important as any other" is not a valid
argument which justifies violating Policy.  

And ultimately, your assertion is fundamentally incorrect because we
can appeal a question to the technical committee, and once
they have ruled, then one particular view *is* valid, and another
particular view *is* invalid.

> I think a small minority of developers can
> easily get identified as pushing their own agendas if we did an informal
> poll on this list. Those are the one's I have issue with and will continue
> to say so. Most likely a strong feeling to respond to this message will
> promote you to the top of the list 8-)

Ad hominem.  If you think they are pushing their own agenda, then
identify it.  What I've seen so far on this thread is an honest desire
to improve the quality of the Debian distribution.  Consistency
between packages and avoidance of using debconf to either (a) display
silly and inane messages about binutils, or (b) blow away
user-managed configuration files are both things which we should
strive for in order to improve the quality of the overall Debian
distribution.  As such, those are agendas for the public good, and not
what I would call private agendas.

- Ted




Re: Time to package simpleinit?

2003-04-27 Thread Theodore Ts'o
One big problem with Richard Gooch's simpleinit is that it is
functionally very different from the standard System V init scripts.
Specifically, he assumes that runlevel n+1 is always a superset
of runlevel n, and that in order to get to runlevel n+1, you must
first start up all of the services at runlevel n.

Runlevel 6 has been used for "reboot" since time immemorial, and in
fact is documented in Debian Policy as such.  Simpleinit can't support
this.

>  * The /etc/init.d/ scripts would need to add "need otherscript" (and
> sometimes "provide something"). As I think it is a very bad idea to edit
> these scripts in our post-install (and try to reedit them in
> pre-remove)) one would have to file bugs agains packages with
> /etc/init.d scripts. Will that be sucessfull? How cooperative will the
> maintainers of these script be?

And just adding "need otherscript" and "provide otherscript" will
break compatibility on systems that don't use simpleinit, unless the
system V initscript package is enhanced to provide no-op functions
which provide "need" and "provide".  

>  * Is there even interest in simpleinit by others than me? I would also
> need someone to ask if I have problems with sysvinit or similar, and I
> would like to know who thinks he is capable of helping me? Are there
> people that might help me when it comes to file bug against packages
> with /etc/init.d scripts?

Simpleinit is unfortunately completely incompatible with System V
init.  So at the very least, Debian Policy would have to be amended to
support simpleinit, and I'm not really convinced it's really worth it
for Debian to support two fundamentally different init script systems.

Not only are the init scripts different, but the interface which is
exported to the system administrator, and what can and can't be
implemented using simpleinit, are completely different.

For this reason, I consider simpleinit to be a failure and a mistake.
With a little bit more work, for example, the traditional system V
runlevels could be implemented, and the dependencies could have been
implemented in a structured comment block, for full backwards
compatibility.  I've been told that SuSE's init script system does
this, while also providing fully automatic dynamic dependency
management, a la simpleinit.  I haven't had a chance to look at it, but
everything I've heard about it makes it sound far better than
simpleinit.

- Ted




Re: Time to package simpleinit?

2003-04-28 Thread Theodore Ts'o
On Mon, Apr 28, 2003 at 03:43:25PM +0200, Joachim Breitner wrote:
> 
> A runlevel is just any script whose name makes it being called by
> /sbin/init on a certain runlevel, like
>   /etc/init.d/runlevel.3
> There is nothing special about this script, it could do anything you
> want. Usually I think it will only print out messages like "Dude, this
> is runlevel 3" and  all the packages that should be started in
> runlevel 3. It may have the line "need runlevel.2" at the beginning, but
> thats totally optional. You could also make runlevel 3 build upon
> runlevel 4, or none at all, or even make it dependant on something (eg
> runlevel 5 starts either runlevel 2 or runlevel 3, depending on whether
> it is started on weekends or on regular days).

The problem with simpleinit is that it only really worries about how
to start up stuff, and the only mechanism it has for shutting down
scripts is by using the rollback scheme, which assumes that things
should be shut down in the reverse order that they were started.

An inherent assumption of this scheme is that you need to know when to
do a rollback and when to execute a need.  For example, suppose you
want to go to run level 3.  Should you execute /etc/init.d/runlevel.3?
Well, if you're at run level 2, then presumably, yes.  But suppose
you're at run level 4?  In the Richard Gooch view of the universe, run
level 4 always has more stuff started than run level 3, so the right
answer is that you tell init to roll back to run level 3.  (It also
assumes that you went through run level 3 on the way to run level 4
--- otherwise, simpleinit will start shutting down processes until it
finds run level 3 or it runs out of startup scripts.  So if you didn't
stop at run level 3 before going on to run level 4, the rollback
command will happily kill off all of the daemons in your system before
running out of startup scripts.  Oops.)

Basically, the fairest thing to say about simpleinit is that it
doesn't have the same functionality as run levels, and it gives up
some of the generality of run levels in the name of simplicity.  But
as a result, its emulation of run levels is at best imperfect, and
there are definitely things that you can do with run levels that you
can't do with simpleinit.

Specifically, with run levels you can specify a different set of
scripts to start *and* stop, whenever you enter a particular run
level, and you can enter run levels in any arbitrary order.  So you
can go from run level 2, to run level 3, to run level 5, to run level
2, and then back to run level 3.  (For example.)  You can't do this in
simpleinit, because the only way to stop daemons is via the rollback
mechanism. 

Does this matter?  It depends on what system administrators have
utilized run levels for.  Historically Debian has given a lot of
leeway to system administrators to use run levels however they wish.
So it is certainly not the case that simpleinit could never be a
replacement for sysvinit.  That might be considered by some to be a
feature, but I for one was left wishing that Richard had spent a bit
more time worrying about backwards compatibility, so that simpleinit's
abilities could be a superset of sysvinit's, and not a subset.

> Dependecy informations in script comments are a bad idea:
>  * You are limited in type of script (shell, perl, C, vbs) you want to
> use.

I'm not sure it would *ever* be a good idea to use a C executable in
an /etc/init.d startup script.  And in fact, historically most people have
considered it wise to simply use /bin/sh for init scripts for maximal
portability.

>  * You can't use dependency conditinal, like in:
>   if any remote file systems are to be used via an interface that is a
> ppp device, than depend on pppd unless the file system is marked noauto
> or the target host is already pingable for some strange reason
>Ok, the example is bad, but having laptops and other mobile devices
> in mind, that have to be very flexible in a lot of ways, this is an
> advantage.

Be very careful here.  A real danger with dependencies is that you get
yourself into a circular dependency which just wedges the entire
system up at boot time.  Merely *adding* a package can get you into
trouble where the system will no longer boot, but simply locks up in a
deadlock of unsatisfiable need commands.  Actually, what simpleinit
will do is simply ignore the need command the second time it is
called.  Which means that if there is a circular dependency, a
(randomly chosen, depending on which script wins the race) dependency is
simply ignored.  
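
To make the failure mode concrete, two scripts that need each other
are all it takes (a sketch; the service names are made up):

    # --- /etc/init.d/a ---
    #!/bin/sh
    need b                 # a won't finish starting until b has started...
    echo "starting service a"

    # --- /etc/init.d/b ---
    #!/bin/sh
    need a                 # ...but b won't finish starting until a has started
    echo "starting service b"

Whichever script's need happens to be called second simply gets ignored.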

Ignoring a dependency is probably better than a deadlock which makes
the system unusable, to be sure, but it means that it is essentially
impossible (equivalent to the halting problem) to do any kind of
static *or* dynamic analysis to determine whether or not adding a
particular script to the system is safe or not.

- Ted




Re: /run/, resolvconf and read-only root

2003-04-28 Thread Theodore Ts'o
On Mon, Apr 28, 2003 at 06:09:10PM -0400, Sam Hartman wrote:
> OK, I think my worst fears are realized.  You do actually want to
> solve all the goals I could have imagined you possibly wanting to try
> try and solve.
> 
> I think I am very likely to wait until there is a policy change or at
> least text that would be good guidelines as a policy change before
> implementing.  The thread seems rather long, and my initial concern is
> that this seems somewhat under thought through.

... and those who have tried to explain why it's a bad idea or have
concerns have been brushed off.  So I've given up for now trying to
explain to you folks why I'm not convinced, since I don't have time to
go pig mud-wrestling, but please don't assume that silence means
assent.

- Ted




Re: The Debian Mentors Project

2003-05-13 Thread Theodore Ts'o
On Mon, May 12, 2003 at 11:40:04PM -0400, Joe Nahmias wrote:
> 
> If I may make a suggestion, a user should only be able to upload a
> package that either:
> 
> a) doesn't appear in the repository
> 
> - -or-
> 
> b) already has the uploader as maintainer
> 
> - -or-
> 
> c) has a RFA/O bug filed in WNPP
> 

What I would suggest is similar.  If the package has never been
uploaded to the mentors repository, require that it be
moderated/approved by a DD.  After that, a package in the mentor
system can only be superseded by the package's "maintainer" (i.e., the
non-DD who is "responsible" for the package) or a DD.

What, you say people shouldn't trust the binaries for any purpose?  If
so, why have them there in the first place?  Just let it be a place
where the apt deb-src repository can be made available

- Ted




Re: Do not touch l10n files (was Re: DDTP issue)

2003-05-14 Thread Theodore Ts'o
On Wed, May 14, 2003 at 12:07:29PM +0200, Martin Quinson wrote:
>
> Your engagement for the quality of your package is really great. Only, I
> think that you are not responsible of the translation. I know that there is
> a lack in debian framework concerning this point, but it really should be so
> ('cause maintainer cannot be responsible for translations they do not
> understand. How do you handle tranlations in russian, japaneese and
> bokmal?).

This is a fundamental question for which there definitely isn't
consensus, and it is a fundamental polity (governance) issue.

One position is that the linguistic teams have full and ultimate responsibility
over the translations, and there is no recourse or appeal if the
maintainer doesn't like what they have done.   

Another position is that the maintainer is ultimately responsible; he
or she may delegate responsibility to helpers, just as the Debian
Leader may delegate certain responsibilities to subordinates.
However, it is clear that the maintainer or the Debian Leader is
ultimately responsible, even if the wise maintainer and/or Debian
Leader may not choose to exercise his or her perogatives very often.

This point is a subtle one.  I will point out that in a corporate
setting, it's quite normal for the employee's manager and/or his
manager's manager to not fully understand all of the work that
the employee does.  Yet they are still responsible for the work of the
employee, and if they don't like it, they can tell the employee to do
things a different way, or in the extreme case, they can fire him.

Obviously, if the manager doesn't completely understand what the
employee is doing, there will be a certain negotiation, and a certain
back and forth over goals and directions and what is and isn't
technically possible, etc.  Hopefully, said negotiations will be done
in a mutually respectful and civil manner.  But that doesn't change the
fact that ultimately the manager gets to have the final say.

Which model people subscribe to makes a lot of difference in how they
communicate.  For example, if your manager doesn't like the work that you
do, even if you think his grounds for objecting may not be the best
ones, would you tell him, "tough luck"?   Probably not

- Ted

P.S.  To the extent that the DDTP gives the package maintainer veto
rights, it seems pretty clear that at least initially the DDTP
believed that the package maintainer was ultimately responsible.
Given the comments and the tone taken by some of the people on
some of the language teams, it's not clear they believe that as
strongly today.




A strawman proposal: "testing-x86" (Was: security in testing)

2003-05-14 Thread Theodore Ts'o
On Wed, May 14, 2003 at 02:22:05PM +0300, Chris Leishman wrote:
> I care about security in testing, and I believe others do too.  But I 
> don't think the process should be the same as with stable releases.  
> Testing should not become another psudo stable distributionit's for 
> testing.  So I don't think security management needs to be anywhere 
> near as comprehensive.
> 
> *shrug* But maybe I'm wrong and it's just me who likes to run testing 
> (to help out with 'testing' the distribution) but doesn't really like 
> the idea of having to deal with known remote security problems.  Maybe 
> nobody else cares and I should just shut up ;-)

Well, ideally, people could test packages in unstable.  When I
first started experimenting with Debian, 4 years ago, I used to use
unstable.  But there was a time when I got bitten with a succession of
bad uploads that trashed my system one way or another --- a buggy perl
or lilo upload, that was not easy to repair.  So that's why I jumped
to testing --- because life was too short to have to stich my system
together after a careless maintainer uploaded something that obviously
couldn't have been tested before the upload, and that was happening
too often.

But after a while, I started noticing that way too many packages were
never entering testing.  In many cases, it was because they were
stalled because of a platform-specific bug on another architecture, or
because the package depended on a specific version of a very
complicated package (such as glibc), and the complicated package was
stalled for one or more reasons.  If a large number of packages aren't
entering testing, then obviously those packages can't get the benefit
of wider testing by the people who use testing.

In addition, some of my machines aren't necessarily behind firewalls
some or all of the time.  A case in point is my laptop, where I
actually do all of my mail reading and most of my developing these
days.  (With some help from distcc :-).  Not being able to get
security updates is a real problem for those classes of machines where the
administrator is willing to run something bleeding edge, but for
which security fixes are hung up because of RC bugs in the package or
in some package dependency.

I've solved the problem for myself by just simply biting the bullet
and using unstable.  I either have gotten lucky, or maintainers of
core packages have gotten much more careful about testing their
packages before uploading, so I haven't gotten screwed by updates as
often as I did before.  If that's the case, then maybe the testing
distribution has outlived its usefulness.

But if people feel otherwise, then it would make sense to think of
ways in which testing might be able to be more true to its original
goals --- which is to expand the number of people who can test out
packages before a stable release.  If that's the case, then for a
given platform:

* it's silly to not let a package be tested by a greater
number of people if it's being held up due to a
failure on an unrelated architecture.

* it's silly to not let a package be tested by a greater
number of people while we noodle over the question of
whether the GNU FDL meets the DFSG, or whether the
documentation needs to be eventually split out and put
into non-free.

So let me make the following modest strawman proposal.  Let us posit
the existence of a new distribution, which for now I'll name
"testing-x86".  Let's set aside the question of whether there should
be "testing-arch" for every single architecture, and whether this
should supplant the existing "testing" distribution.  Is the following
feasible operationally, from an implementation point of view, from a
space on ftp servers point of view, etc?

Let us assume that there are one or more human(s) in the loop, who
serve as the master(s) of the "testing-x86" distribution.  Once the
required time period has been met based on the priority of the upload,
if there are no RC bugs, then the package enters testing-x86
immediately.  If there *are* RC bugs, notification of this fact plus a
list of the RC bugs are sent to the testing-x86 master(s).  If the RC
bugs are "not really critical for testing", then the human(s) in the
loop can make a manual decision to allow that package to enter
testing-x86.

Since the testing-x86 distribution is, as the name suggests, specific
only for the x86 architecture, RC bugs which are specific to other
platforms needn't prevent the package from entering testing-x86.  In
practice, does this happen?  Well, let's take a look at just one
package as an example, glibc.  Glibc currently has 4 RC bugs.  But
none of them would or should prevent the glibc in unstable from
entering this proposed testing-x86 distribution:

*)  One bug is specific to sparc64 (#18838)

*)  One bug is specific to m68k (#184048)

*) And two bugs are copyright issues.

Re: Status of Sarge Release Issues (Updated for May)

2003-05-14 Thread Theodore Ts'o
On Sat, May 10, 2003 at 10:16:40PM -0400, Morgon Kanter wrote:
> This one time, at band camp, Marc Haber <[EMAIL PROTECTED]> wrote:
> > On Mon, 5 May 2003 17:17:20 -0500, Chris Cheney <[EMAIL PROTECTED]>
> > wrote:
> > >I have read that Linus is planning to have 2.6 released before July and
> > >have 2.7 open for commits by Kernel Summit time (July).
> > 
> 
> Can't remember where I heard it, but it was a reputable source. I 
> heard that 2.6 was for release on the (American) night of Halloween.

You're thinking of the feature freeze, which was October 31st
(Halloween) of last year (2002).  It went fairly well, actually, in
terms of discipline of not letting new features in after the code freeze.
We beat up Linus pretty badly last year about how long the 2.4 freeze
cycle went.

There is an IRC discussion scheduled today to talk about the 2.5
shutdown / 2.6 release issues, and the current date which is being
floated amongst the kernel developers is June or July.  The Kernel
Summit will be held just before the Ottawa Linux Symposium in July,
and both Linus Torvalds and Andrew Morton (the anointed 2.6
maintainer) will be there.  There have been jokes that if necessary,
we will just get Linus really drunk and then rip it out of his hands
and give it to Andrew if Linus hasn't released 2.6.0 by then.  :-)

Seriously, although the date is not guaranteed, I think it's very
likely that 2.6.0 will be out in Summer 2003.   

That being said, distro's may want to wait a while before they're
confident enough to ship a 2.6 kernel as their default kernel in their
distribution.  Current guesses are that we'll likely see commercial
distributions with 2.6 based releases as early as 2003 Q4, and more
likely, 2004 Q1 or Q2.

I'll note that with 774 RC bugs in Debian as of this writing,
given the typical rate at which RC bugs seem to be opened as well
as closed out, it seems pretty likely that 2.6 will stablize long
before Sarge is ready to ship.  (This is not meant as a criticism;
however, we can all choose to take this as a challenge if we 
wish.  :-)

- Ted




Re: security in testing

2003-05-15 Thread Theodore Ts'o
On Wed, May 14, 2003 at 05:53:50PM -0400, Don Armstrong wrote:
> Manoj's answer, while witty, is closer to the mark than you may
> realize.
> 
> Debian will always be for whoever the people contributing to Debian
> are willing/want it to be for. No more, no less.

Um, when we all agreed to be Debian Developers, we agreed to the
following from the social contract:

* Our Priorities are Our Users and Free Software

We will be guided by the needs of our users and the free-software
community. We will place their interests first in our priorities. We
will support the needs of our users for operation in many different
kinds of computing environment.


So what does that mean?  If we define "our users" as ourselves,
then the social contract reduces to "we will place our interests first
in our priorities", and that doesn't sound so good, does it?  :-)

If our users include those who want something that is less stale than
"stable", but where they don't want to deal with having to stich
together their system after an update to perl or lilo leaves their
system completely unusable, how do we meet their needs?  There are
certainly disagreements at the tactical level (we could solve this
problem by applying pressure to people to not upload broken packages
to unstable; we could solve the problem by fixing enough RC bugs that
packages flow into testing much more reliably and quickly; we could
solve the problem by recruiting people to upload into
"testing-security").  

But the first question before we discuss tactics is whether or not we
"should" do it.  Does the fact that we've accept the Social Contract
put any kind of moral claim on what we as an organization do?  If the
question to that question is yes, then individual developers will need
to search their souls and decide whether or not this means they are
feeling called to put in the time to fix an RC bug, or work to NMU or
otherwise clear a blocked, critical package, or contribute to security
or testing-security, or do something else to further the goal.

> I'd argue that the converse is more important. [Unless most developers
> do everything they do for purely altruistic reasons. I know I do what
> I do for selfish reasons first.]

That may be true, but the ideals articulated in the Social Contract
aspire to something higher than that.

- Ted




Re: security in testing

2003-05-15 Thread Theodore Ts'o
On Wed, May 14, 2003 at 05:37:51PM -0700, Keegan Quinn wrote:
> 
> Sure, every now and then a badly-broken package makes it in for a
> day or two.  This seems to be far less harmful than the massive
> headache that treating 'testing' as a usable release seems to be
> causing.

Something that would make unstable much more useful is if dpkg had a
reliable "undo" capability.  It's unpleasant when you update a
broken package, and a large number of packages break, and you can't
necessarily find a copy of the older, non-broken version of the
package to re-install.  If you're not a developer, you don't have
access to archives, so your choice is to either go back to the stable
or testing version of the package, or try to find a mirror that still
has the n-1 release of the unstable package.

So simply making it easier for people to get a previous version of a
package when the current version in unstable is borked would probably
take away many of the reasons why people might want to use "testing"
instead of "unstable".

The harder disaster scenario to deal with is when after an update,
your system is so totally borked that recovering requires use of a
rescue disk, or other manual interventions.  As I mentioned, there was
a time some years ago, when I was first getting involved with Debian,
when there were broken perl uploads (which broke dpkg, so it was
painful to back out of such a situation) and broken lilo uploads.
Both were screwing
up my system to such an extent that I was spending far more time than
I liked doing manual wizardry just to get my system back to a
recoverable state.

At the time, when I whined and complained, the response I was given
from my Debian mentors was to use the testing distribution
instead.  (I was also told, jokingly, that all of the LILO
breakages were because the lilo maintainer was really a secret GRUB
supporter, and was breaking LILO just to get people to convert over to
GRUB, and that I should just get with the program and switch over to
GRUB.  :-)  

But now people are saying that using testing is a bad thing.  Part of
the problem is that different people have very different ideas about
exactly what testing is useful for.

>Hmm.  Funny how myself and every admin I know have only very minor issues with
>running unstable.  What, pray tell, makes it such an 'obvious' non-option for
>end users?  Well-timed unstable snapshots are often more 'stable' than
>commercial Linux releases, in my limited experience.

Clearly you didn't use unstable when I did several years ago, or
you're just remembering those days through rose-colored glasses.  :-)

But seriously, if the right answer is that people just shouldn't be
using testing, we should say that, in big letters.  And then perhaps
there ought to be tools that make it easier for someone to get their
system functioning again after an unstable package update leaves them
screwed over.  Ideally, that should never happen, but hey, people make
mistakes.  We just need a way to make sure that such mistakes are
recoverable.

- Ted




Re: A strawman proposal: "testing-x86" (Was: security in testing)

2003-05-17 Thread Theodore Ts'o
On Sat, May 17, 2003 at 11:41:02AM +0200, Eduard Bloch wrote:
> Not only you, Jerome and me were suggesting it in the past. However I am
> afraid that the whole package movement machinery would have to be
> rewritten to allow independent handling of the version in different
> "testing" threes, plus there would appear some problems with porting of
> the security fixes to all varios testing versions on different
> plattforms. And don't forget, we need lots of man power to sort the
> relevant RC bugs, and likely need something in the BTS to set
> architecture marking tags.

I was thinking about having someone manually override packages into
the hypothetical testing-x86 distribution, and applying common sense
(hence my glibc example), as opposed to needing new machinery in the
BTS to set architecture marking bugs.  I also explicitly called it
"testing-x86", as opposed to the the x86 architecture of testing,
precisely to avoid needing to rewrite the package movement machinery.
The thought process was to see how much of the existing machinery
could be reused without needing to modify them, as I'm a Lazy S.O.B.  :-)

As far as the extra man power is concerned, if there is someone to
support the "testing-x86" distribution, and no one can be found to
support the "testing-m68k" distribution  oh well, they'll just
have to settle for the traditional stable, testing, and unstable.


- Ted




Re: Do not touch l10n files

2003-05-19 Thread Theodore Ts'o
On Sun, May 18, 2003 at 06:55:37PM +0200, Marc Haber wrote:
> Highly technical packages like zebra, netfilter-related stuff and
> linux-atm are most likely to be used by people who know English. Not
> speaking English will make running routers and/or internet security
> systems almost impossible anyway.

I've done most of the work to internationalize e2fsprogs (at least as
far as gettext is concerned; I haven't done the framework to
internationalize man pages yet), and while it was done mostly for my
own edification, to learn about gettext, I have had some concerns
about whether or not Internationalization is actually a *good* thing.

The main problem here is support.  If someone uses e2fsck with NLS support
enabled, and with a non-US locale, the messages will come out in their
native language.  Which is all very well-and-good until they run into
problems and they start asking me for help.  If it's in some language
I don't speak, such as Swahili, I'm going to be very hard pressed to
actually help them.

E2fsprogs may be a special case in that when I get these cries for help
(which mainly are of the form, "help me, I'm a loser, I didn't make
backups, can you help me recover my 10 years of thesis research"),
time is of the essence.  So waiting for a translation team to
translate output back into English is not an option.

Furthermore, when you're dealing with a filesystem which may have been
modified by e2fsck during its initial run, the possibility of
resetting the locale back to C to defeat the translation may not help,
as the second e2fsck run may not have same messages as the first
e2fsck run.

I suppose that I could try to look at the Swahili .po file, and try
to match the output and turn it back into English, but that will be
very, very tedious, and so I won't be able to help as many people when
they give me their sad stories of years of research being lost.

There are a couple of possible solutions:

1) Someone could write a program which takes output, and a .po file,
and tries to undo the translation.  This is a lot harder than it might
first appear, since the strings may have printf expansions (i.e., %d,
%x, and %s, with the last being particularly hard to deal with).
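
A very rough sketch of idea (1), handling only the easy part ---
single-line .po entries with no printf expansions at all --- might
look something like this (call it untranslate.sh; this is a sketch,
not something I've polished):

    #!/bin/sh
    # usage: untranslate.sh LOGFILE PO_FILE
    # map translated output lines back to their English msgids;
    # %d/%s expansions and multi-line entries are left untouched
    awk '
      NR == FNR {
          if ($0 ~ /^msgid "/)  { id = substr($0, 8, length($0) - 8);
                                  sub(/\\n$/, "", id) }
          if ($0 ~ /^msgstr "/) { str = substr($0, 9, length($0) - 9);
                                  sub(/\\n$/, "", str); map[str] = id }
          next
      }
      { if ($0 in map) print map[$0]; else print $0 }
    ' "$2" "$1"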

2) Use VMS or VM style message prefixes to make it easier for someone
who doesn't understand the internationalization to figure out what's
going on.  (i.e., "SYS-EXT2-YOURFUCKED-14326: Stupid summer intern who
shouldn't have been given root access ran mke2fs on half of a MD
device", where "SYS-EXT2-YOURFUCKED-14326" is the same regardless
of the translation, so it can be easily looked up).

3) Tell users to either not use the NLS support at all for e2fsprogs,
or resign themselves to a second-class citizen level of support,
simply because the developer can't provide free support in a language
he (unfortunately) doesn't understand.

Right now the default answer is #3, but that's not very satisfying.
Among other things, it calls into question whether or not the
internationalization of e2fsprogs was actually a good idea or not, or
just a complete waste of time.  As for the other possible solutions, I
don't have time to write #1, but if someone is looking for a good
summer project, I think it would be very useful.

- Ted




Re: Do not touch l10n files

2003-05-20 Thread Theodore Ts'o
On Mon, May 19, 2003 at 11:03:17AM -0500, Steve Langasek wrote:
> It seems to me this would be mitigated by two factors: 1) if they know
> enough to realize they should be emailing you in English, they probably
> realize they need to send the error messages in English too (by running
> e2fsprogs in English if possible, or providing an impromptu translation
> if not); 2) in single user mode, where I would expect most of the
> time-critical support requests to originate, it requires a significant
> amount of dedication to get a locale other than the C locale.

Sure, but both of these suggestions call into question whether it is
pointless to have translation teams translate those parts of
e2fsprogs's .po file which are for e2fsck.

> In practice, are you running into support requests where there is a
> language barrier because of l10n of the e2fsprogs?

Not yet, but to date there have been very few people who have actually
done .po files for e2fsprogs.  I have Turkish, German, Czech
translations, and that's really about it.

And yes, until someone starts agitating for /share/locale/... so that
the boot-time messages are translated, it's unlikely that someone
needing to translate the e2fsck log file from Swahili back to English
will actually be a problem in real life.  (Which is a good thing,
because translations of translations are generally quite bad, and
unlikely to be accurate enough that it will be easy to figure out what
the original English message was.)  But again, if that's the case, it
may be that internationalizing e2fsck was never a good idea to begin
with, or at the very least, a pointless exercise.

I dunno.  It's certainly a potentially heretical position, but it
really calls into question for whom the e2fsck messages are really
intended.  Are they intended for the local user, who may not
understand English?  Are they intended for the system administrator
(and at least for today, it's pretty much laughable to assume that
someone could administer a Linux system without knowing English ---
although who knows, that might change some day)?  Or is it intended
for the people who try to provide free assistance to people who have
sob stories about hardware failures and unbacked-up data?  (Of course,
that model doesn't actually scale well.)

The traditional answer to this problem has been ugly message catalog
id strings (i.e., the SYS-EXT2-ROOT_DIR_GONE-14356 prefixes).  But
they're ugly as all heck.  One potential solution is to make the
printing of such message id's optional, but turned on by default
if NLS support is enabled and the language is non-English.  Figuring
this out will likely be an ugly hack, but perhaps that's the right
solution.
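
To make that concrete, something along these lines is what I have in
mind (purely illustrative; the helper names, the message id, and the
locale test are all made up, and nothing like this exists in e2fsprogs
today):

    #include <libintl.h>
    #include <locale.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>

    /* Emit message ids only when output is actually being translated.
     * Assumes setlocale(LC_ALL, "") was called at program startup. */
    static int want_msg_ids(void)
    {
        const char *loc = setlocale(LC_MESSAGES, NULL);

        return loc && strcmp(loc, "C") && strcmp(loc, "POSIX") &&
               strncmp(loc, "en", 2);
    }

    /* Hypothetical logging helper for e2fsck-style messages. */
    static void log_msg(const char *msg_id, const char *fmt, ...)
    {
        va_list ap;

        if (want_msg_ids())
            printf("%s: ", msg_id);
        va_start(ap, fmt);
        vprintf(gettext(fmt), ap);
        va_end(ap);
    }

    /* e.g.:  log_msg("EXT2-0042", "Inode %u is in use\n", ino); */

That way someone trying to help can grep the English catalog for
"EXT2-0042" even if the log file itself arrives in Swahili.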

(Of course, this still doesn't answer the question of whether anyone
would ever want or use locale support to be enabled during the initial
boot sequence, such that the boot messages come up in the local
language)

- Ted




Re: use of RDRAND in $random_library

2014-06-13 Thread Theodore Ts'o
On Fri, Jun 13, 2014 at 10:09:02AM +0200, Martijn van Oosterhout wrote:
> > Excuse me if I'm blunt here, but I understand that, on the point of
> > using entropy to seed a PRNG, if you have several shitty entropy
> > sources and one _really_ good one, and you xor them all together, the
> > resulting output is as random as the best of them. If your hardware
> > entropy source is faulted and produces just an endless stream of
> > '001001001001001001', xoring it with a valid Golomb sequence will give
> > you something even more random than a Golomb sequence.
> >
> The proof that XORing streams can't reduce the entropy relies on the
> sources being independant. I think the issue here is we don't know if
> RDRAND is independent or not. That said, doing a SHA256 over the output
> should be sufficient (assuming the CPU doesn't see you're doing a hash and
> short circuits it).

Basically, the question is how blatantly do you think the NSA could
bugger the CPU.  If you believe they can completely bugger the CPU, so
they can detect that you are implementing some explicit crypto
instruction, and change its behaviour, or they can peek ahead in the
instruction pipeline, notice an XOR, determine that one of its inputs
is an RDRAND instruction, and the other inputs come from a read of
/dev/urandom, and then modify the behaviour of the XOR, all in such a
way that the hundreds or thousands of Intel employees that need to
improve, test, debug, etc. the CPU instruction execution engine
wouldn't notice, then you might as well give up now and start
implementing your own CPU from transistors, capacitors, resistors, and
wires.
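
(Just to illustrate the mixing being discussed, here is a toy sketch
--- not code from any of the libraries in question; x86-64 only,
compile with -mrdrnd, and the error handling is minimal.)

    #include <fcntl.h>
    #include <immintrin.h>
    #include <stddef.h>
    #include <unistd.h>

    /* XOR RDRAND output into bytes read from /dev/urandom.  If the two
     * sources are independent, the result is no weaker than the better
     * of the two. */
    int get_mixed_random(unsigned char *buf, size_t len)
    {
        int fd = open("/dev/urandom", O_RDONLY);
        ssize_t got;

        if (fd < 0)
            return -1;
        got = read(fd, buf, len);       /* short-read handling elided */
        close(fd);
        if (got < 0 || (size_t) got != len)
            return -1;

        for (size_t i = 0; i < len; i += sizeof(unsigned long long)) {
            unsigned long long r;
            size_t n = len - i < sizeof(r) ? len - i : sizeof(r);

            if (!_rdrand64_step(&r))    /* RDRAND reported failure */
                return -1;
            for (size_t j = 0; j < n; j++)
                buf[i + j] ^= (unsigned char) (r >> (8 * j));
        }
        return 0;
    }

And, as Martijn says, running the combined stream through SHA-256
instead of a bare XOR also covers the case where you don't entirely
trust the sources to be independent.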

While I am willing to believe they might be able to secretly subvert
or bribe Intel to subvert the RDRAND engine, in some way which no
other or very few other employees at Intel would detect, believing
they could then do the same with the entire CPU is MUCH harder.

There are probably much easier things to do, such as subverting
someone in Red Hat's release engineering department, for example.  A
buggered kernel is easier to accomplish, and much harder to detect.
This is one of the reasons why implementing reproducible binary builds
is a really good idea from a security perspective.  This allows you to
spot check that various binaries correspond to the sources that they
claim to be from.

Cheers,

- Ted





Re: use of RDRAND in $random_library

2014-06-13 Thread Theodore Ts'o
On Fri, Jun 13, 2014 at 06:51:44PM +, Jacob Appelbaum wrote:
> I would expect that if the NSA wanted to take control of the RDRAND or
> the rest of the CPU, they'd dynamically update the microcode in the
> CPU to change how it behaves. To do this, it appears that they'd need
> to sign a microcode and then apply an update.

The Intel CPU doesn't support a persistent microcode update.  A
microcode update has to be uploaded after each power cycle.  That
means that a microcode hack would require that you break root first.
And if you can break root, you can just bugger the kernel or one or
more of the userspace binaries.  That's going to be as detectable as
leaving an extra firmware file in /lib/firmware/intel-ucode.

I've long considered that there are so many zero-day exploits that if
the NSA decides to carry out a focused attack on a single machine, or
machines belonging to a single person, there is a very high
probability they can do whatever they want.  And this isn't a new
problem; even before the days of computers, things like "black bag jobs"
were always a thing.

So I'm personally much more concerned about bulk surveillance, whether
it involves passive surveillance using fiber taps, or trojans
introduced into distribution-provided binaries.  Other people may have
their own personal sense of paranoia, but that's mine.  I happen to
think mine corresponds more with reality, but I'm sure Keith Alexander
and James Clapper would try to claim that I should be wearing tin foil
hats or something.  :-)

Cheers,

- Ted





Re: Bug#754513: ITP: libressl -- SSL library, forked from OpenSSL

2014-07-18 Thread Theodore Ts'o
On Fri, Jul 18, 2014 at 02:03:06PM +0200, Johannes Schauer wrote:
> 
> maybe this will help in the future:
> 
> http://lists.openwall.net/linux-kernel/2014/07/17/235

Latest version of the patch:

http://lists.openwall.net/linux-kernel/2014/07/18/329

Of course, the syscall numbers and interface details are not set into
stone until this gets merged into mainline.

Cheers,

- Ted





Re: Bug#754513: ITP: libressl -- SSL library, forked from OpenSSL

2014-07-19 Thread Theodore Ts'o
On Sat, Jul 19, 2014 at 02:27:56AM +0200, Kurt Roeckx wrote:
> > Of course, the syscall numbers and interface details are not set into
> > stone until this gets merged into mainline.
> 
> It doesn't say much about sizes you can request and what the
> result of that would be.  The getentropy() replacement seems to
> suggest 256 isn't something you want to do (when GRND_RANDOM is
> not set?).  random(4) says not to use > 256 bit (32 byte).

You can request tons of entropy; but in general it's a symptom of a
bug, either in the program or in the programmer.  (For example, the
NSS library was using fopen("/dev/urandom", 0), so the first thing it
did was suck in 4k out of the urandom pool.  Sigh...)

I seriously thought of printk'ing a warning if the program tried
grabbing more than say, 1024 bytes, but I decided that might be too
annoying/assertive.

Basically, if you request less than or equal to 256 bytes, with the
GRND_RANDOM flag not set, and assuming that the entropy pool has been
initialized, getrandom(2) will not block, and you will get all of the
bytes that you requested.

Under any other circumstances, the read() paradigm applies.  It can
return EAGAIN or EINTR, and it might not return all of the bytes you
requested.  There are a few cases where this might apply, such as
GnuPG getting enough bits to generate a long-term public key, but the
assumption is that programmers who are doing that sort of work will
know what they are doing.
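
In other words, the defensive way to call it looks roughly like this
(a minimal sketch; it uses the <sys/random.h> wrapper that newer glibc
--- 2.25 and later --- provides, while with older glibc you would have
to go through syscall(2) directly):

    #include <errno.h>
    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/random.h>

    /* Fill buf with len random bytes, retrying on EINTR and on short
     * reads.  Returns 0 on success, -1 on a real error (e.g. EAGAIN
     * when GRND_NONBLOCK was passed and the pool isn't initialized). */
    int get_random_bytes(void *buf, size_t len, unsigned int flags)
    {
        unsigned char *p = buf;

        while (len > 0) {
            ssize_t n = getrandom(p, len, flags);

            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            p += n;
            len -= (size_t) n;
        }
        return 0;
    }

In the common case described above (flags == 0, a request of 256 bytes
or less, and an initialized pool) the loop body runs exactly once; the
looping only matters in the read()-paradigm cases.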

Basically, OpenBSD's position is that all application programmers
are morons, even the ones who are implementing cryptographic code (or
perhaps especially those who are implementing cryptographic code).  So
they wanted to make a getentropy(2) system call that was completely
moron-proof.  Hence their getentropy(2) call will return EIO if you
try to fetch more than 256 bytes, and EFAULT if you give it an invalid
buffer, but other than that, will never, ever fail.  (Because
applications programmers are morons and won't check return codes, and
do the appropriate handling for short reads, etc.)

I take a somewhat different philosophical position, which is that it's
impossible to make something moron-proof, because morons are
incredibly ingenious :-), and there are legitimate times when you
might indeed want more than 256 bytes (for example, generating a 4096
bit RSA key pair).  So the design is a compromise.  For "normal"
users, who are just grabbing enough bytes to seed a userspace
cryptographic random number generator (a DRBG in NIST SP 800-90
speak), getrandom(crng_state, 64, 0) is enough to seed an AES-512 RNG.
While you _should_ be checking error returns and checking for short
reads, it shouldn't be necessary, and even if the application
programmer is a moron, and doesn't check return codes, it's unlikely
they will get shot in the foot.

Realistically, if someone is moronic enough not to check return codes,
they probably shouldn't be allowed anywhere near crypto code, since
they will probably be making other, more fatal mistakes.  So in many
ways this is a very minor design point.

> Shouldn't it return a ssize_t instead of an int?  I see it's
> limited to INT_MAX, but it seems in the code to return a ssize_t
> but the manpage says int.

All Linux system calls return an int.  POSIX may specify the use of a
ssize_t, but look at syscall(2).

And for the values of buflen that we're talking about, it really
doesn't matter.  We cap requests larger than INT_MAX anyway, inside
the kernel.

- Ted





Why are the gcc-*-base packages priority:required?

2014-08-08 Thread Theodore Ts'o

Potentially stupid question --- why do the gcc-4.[789]-base packages
have the priority required?  And what are they used for?

I'm fine-tuning a small kvm appliance (kvm-xfstests, as it happens), and
I'm trying to keep the root file system as small as possible.  It
appears that I can dpkg --purge the gcc-4.[789]-base packages with no
ill effects.

Am I missing something subtle that will come back and bite me?

Thanks,

- Ted





Re: Why are the gcc-*-base packages priority:required?

2014-08-08 Thread Theodore Ts'o
On Fri, Aug 08, 2014 at 07:46:24PM -0700, Russ Allbery wrote:
> Cyril Brulebois  writes:
> 
> > I'd therefore contact the relevant maintainers to make sure, probably
> > through a bug report asking for a priority downgrade.
> 
> It looks like the only remaining purpose for gcc-4.9-base is to create the
> /usr/lib/gcc//4.9.1 symlink?

and to provide the /usr/share/doc/gcc-4.9-base directory which gets
used as the targets for the symlinks:

 /usr/share/doc/gcc-4.9
and
 /usr/share/doc/gcc-4.9-multilib 

- Ted





Re: Why are the gcc-*-base packages priority:required?

2014-08-09 Thread Theodore Ts'o
On Sat, Aug 09, 2014 at 11:23:24AM +0200, Sven Joachim wrote:
> On 2014-08-09 04:27 +0200, Theodore Ts'o wrote:
> 
> > Potentially stupid question --- why do the gcc-4.[789]-base packages
> > have the priority required?  And what are they used for?
> 
> Providing the mandatory files under /usr/share/doc, all packages built
> from the gcc-4.[789] source ship a symlink under /usr/share/doc.

Sure, but this could be handled using the standard package
dependencies, could it not?  Why do these packages need to have
"priority: required"?

This forces debootstrap to drag them in, regardless of whether or not
they are needed.

- Ted





Re: [FFmpeg-devel] Reintroducing FFmpeg to Debian

2014-08-10 Thread Theodore Ts'o
On Sun, Aug 10, 2014 at 12:25:33AM -0700, Andrew Kelley wrote:
> 
> High quality libraries must iterate on their API. Especially for a library
> trying to solve such a complex problem as audio and video encoding and
> decoding for every codec and format. It is unreasonable to expect no
> incompatible changes. Also both ffmpeg and libav codebases have a lot of
> legacy cruft. Libav is making a more concentrated effort at improving this,
> and the evolving API is a side-effect of that.

I beg to differ.  My definition of a "high quality library" is one
where care is taken when designing the ABI/APIs in the first place,
and which, if absolutely necessary, uses ELF symbol versioning to
maintain ABI compatibility.

There are plenty of "high quality libraries" (glibc, libext2fs, etc.)
where we've been able to maintain full ABI compatibility even while
adding new features --- including in the case of libext2fs, migrating
from 32-bit to 64-bit block numbers.  And if you're careful in your
design and implementation, the amount of cruft required can be kept to
a very low minimum.
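
For anyone who hasn't played with it, GNU symbol versioning looks
roughly like this --- a hypothetical sketch, with invented function,
version, and map file names (this is not libext2fs code):

    #include <stdint.h>

    /* New interface (64-bit block numbers); this is the default "frob". */
    int frob_v2(uint64_t blk) { return blk != 0; }
    __asm__(".symver frob_v2, frob@@LIBEXAMPLE_2.0");

    /* Old interface, kept alive for binaries linked against the 1.0 ABI. */
    int frob_v1(uint32_t blk) { return frob_v2(blk); }
    __asm__(".symver frob_v1, frob@LIBEXAMPLE_1.0");

    /* Link the shared library with -Wl,--version-script=libexample.map,
     * where libexample.map contains roughly:
     *
     *     LIBEXAMPLE_1.0 { global: frob; local: *; };
     *     LIBEXAMPLE_2.0 { global: frob; } LIBEXAMPLE_1.0;
     */

Old binaries keep resolving frob@LIBEXAMPLE_1.0, newly linked ones
pick up the 2.0 default, and the library's soname never has to change.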

- Ted








Re: [FFmpeg-devel] Reintroducing FFmpeg to Debian

2014-08-11 Thread Theodore Ts'o
On Mon, Aug 11, 2014 at 10:53:56PM +0200, wm4 wrote:
> 
> To be fair, FFmpeg does its own "manual" symbol versioning by appending
> increasing numbers to function names. But the real problem are not
> these functions, but public structs. Imagine a new API user fighting to
> guess which fields in AVCodecContext must be set, or which must not be
> set. Seasoned FFmpeg developers probably don't know the horror.

There are some best practices in API design; one of them is to
minimize public structs as much as possible.  Instead, have blind
pointers which are handed back by an "initialize object" function, and
then have setter/getter functions that allow you to fetch various
parameters or flags which modify how the object behaves.  This allows
you to add or deprecate flags and configuration parameters in a
relatively sane way.
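
Concretely, the public header for such a design ends up looking
something like this (a hypothetical sketch; the names are invented and
are deliberately not any real FFmpeg/libav API):

    /* public header: the handle is opaque, its layout is private */
    typedef struct decoder decoder;

    decoder *decoder_new(void);
    int      decoder_set_option(decoder *d, const char *name, const char *value);
    int      decoder_get_option(const decoder *d, const char *name,
                                char *buf, int buflen);
    void     decoder_free(decoder *d);

    /* in the library's .c file only */
    struct decoder {
        int   threads;
        char *codec_name;
        /* new fields can be appended freely; callers never see them */
    };

Since callers can't poke at the struct, the library can add, rename,
or retire fields without breaking the ABI; the documented option names
become the contract instead.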

I have this dream/fantasy where all of the energy over developing and
maintaining two forks was replaced by a spirit of cooperation and the
developers working together to design a new API from scratch that
could be guaranteed to be stable, and then applications migrated over
to use this stable, well-designed, future-proofed API.

Call me a naive, over-optimistic dreamer, but   :-)

(And, the yes, the new API probably should be a bit higher level.)

"Can we all just get along?" -- https://www.youtube.com/watch?v=1sONfxPCTU0

  - Ted





Re: Reverting to GNOME for jessie's default desktop

2014-08-12 Thread Theodore Ts'o
On Tue, Aug 12, 2014 at 12:26:18PM -0400, Joey Hess wrote:
> See my 1st message to this thread.

Joey,

With respect to your question re HiDPI displays and Xfce, I'm using
Xfce4 from Debian Testing on a Lenovo T540p with a 3k screen, and
setting things up was fairly straight forward.  I got most of what I
needed by setting Custom DPI Setting in Settings -> Appearance ->
Fonts -> DPI.

The main pain point that I've had is that Google Chrome doesn't
support HiDPI very well.  I've manually adjusted the zoom level which
mostly compensates for almost everything except the buttons on the
toolbar, but that's a problem which is independent of the desktop
environment, and won't be fixed until Aura support for Linux
arrives[1].

[1] https://code.google.com/p/chromium/issues/detail?id=143619

So that's my experience with Xfce and HiDPI displays; at least for
this hacker, it was orders of magnitude less painful than dealing with
GNOME.  :-)

Cheers,

- Ted

P.S.  I don't have the double suspend problem; it looks like these
days, xfce4-power-manager doesn't do any suspending at all, and it's
all handled by systemd.  So the main pain point there was not wanting
suspend on lid close when I was on AC power.  I found the following
which worked around the issue for me (except that I haven't been able
to make the udev rules work, but I don't mind running
/usr/local/bin/suspend-prevent by hand when I go on and off AC mains.)

http://nrocco.github.io/2014/06/05/suspend-prevent-systemd.html





Re: Reverting to GNOME for jessie's default desktop

2014-08-12 Thread Theodore Ts'o
On Tue, Aug 12, 2014 at 09:33:03PM -0007, Cameron Norman wrote:
> >With respect to your question re HiDPI displays and Xfce, I'm using
> >Xfce4 from Debian Testing on a Lenovo T540p with 3k screen, and
> >setting things up was fairly straight forward.  I got most of what I
> >needed by setting Custom DPI Setting in Settings -> Appearance ->
> >Fonts -> DPI.
> 
> Did you have to edit anything else as well? I wonder if there could be some
> installer hook that detects DPI and adjusts these settings automatically...

As far as screen resolution was concerned, that was pretty much about it.

As near as I can tell, my Thinkpad doesn't actually export any EDID or
DDC information from which you could get the DPI information.  So
unless you wanted to use a database of common laptop signatures, cross
check that with the screen resolution (the T540p has different DPIs
depending on whether you upgrade the LCD panel, and to what resolution), it's
not clear to me how practical it would be to automate this, due to the
problems with the "detect the DPI" step.

Assuming you can detect the DPI (and life gets entertaining when the
user has an external monitor hooked up --- so you need to decide
between using the DPI of the LCD panel or the external monitor), using
the command-line tool xfconf-query to actually adjust the setting is
pretty simple.

> >So that's my experience with Xfce and HiDPI displays; at least for
> >this hacker, it was orders of magnitude less painful than dealing with
> >GNOME.  :-)
> 
> I would appreciate if you went into a little detail on what pain you had
> with GNOME for comparison purposes.

It's the usual frustrations, that have been aired a million times
before[1].  Struggling with the GNOME equivalent of the Windows Registry,
wanting to use a 2D workspace, struggling as random GNOME extensions
break when GNOME releases a new version, etc., etc., etc.

[1] http://felipec.wordpress.com/2013/06/12/the-problem-with-gnome-3/

Basically, I can be effective and efficient with Xfce.  I can't say
the same about GNOME, as a power user.  Which is OK, since I'm clearly
not the target audience for the GNOME project.  Oh, well

Cheers,

- Ted





Re: Reverting to GNOME for jessie's default desktop

2014-08-12 Thread Theodore Ts'o
On Wed, Aug 13, 2014 at 01:44:43AM +0200, Michael Biebl wrote:
> 
> If you increase the DPI settings under XFCE following the instructions
> posted by Ted, none of the UI elements besides text are scaled, no
> scaled cursor, no scaled icons, no scaled window decorations, etc.

That's a fair comment.

The UI elements which I care about, which are the icons in the panels,
are scaled with the size of the panels.  The same is true with the
icons in the notification panel and window buttons.

There are a few UI elements, such as the window decorations, which
aren't scaled, but I actually prefer them to be small.

The only UI element where it has really bothered me has been the
Chrome browser, and that problem is DE independent, since Chrome uses
Aura and not GTK.

The bottom line is that XFCE is actually pretty usable even on a HiDPI
screen.

Cheers,

- Ted





Re: Reverting to GNOME for jessie's default desktop

2014-08-13 Thread Theodore Ts'o
On Tue, Aug 12, 2014 at 05:17:10PM -0700, Octavio Alvarez wrote:
> 
> That's why I see GNOME 3 as a tablet environment. I'd love to use a
> tablet with GNOME 3. But using it in a desktop just reduces the
> communication between me and my computer. What is Debian?

This is actually the core (hidden) question which I think is driving
the whole debate.  Ignoring the claims of Debian as the "universal
operating system", what audience does Debian what to target by default
in its installer?

Is it the power user?  Is it developers?  Is it the typical users I've
seen on Launchpad, such that I've largely stopped dealing with bug
reports there --- far too many Ubuntu users can't file a proper bug
report, and then other Ubuntu users Google their symptoms, and drop in
irrelevant observations for problems that superficially have the same
symptoms, but are about something else entirely?

If you want Debian to target people who like Windows 8, or maybe Mac
OS, then GNOME or Unity is the right default DE.  If you don't care
about servicing the needs of your current user base, and instead want
to chase after (hopefully) potentially new users, the way the GNOME
project has done, by all means, go with GNOME.

I have a slight bias towards XFCE, but honestly, it's for primarily
selfish reasons --- I don't want the sort of bug reports that
Launchpad gets; the vast majority of bugs filed with the BTS are by
people with whom I can work to fix bugs, and as a result packages
such as e2fsprogs get better for everyone.  And so very selfishly, I
don't want that signal to get drowned out by the noise which is
Launchpad, and so I'd prefer that Ubuntu continue to target the
Windows 8 and Mac OS user market.

It may be that Debian would like to go after the same thing.  If so,
I'll be sad, but given that I can always install some other DE, at the
end of the day it doesn't make that much difference to my personal
workflow, since I can always override the default.

Cheers,

- Ted





Re: Reverting to GNOME for jessie's default desktop

2014-08-13 Thread Theodore Ts'o
On Wed, Aug 13, 2014 at 04:09:25PM +0200, Ansgar Burchardt wrote:
> To quote a fairly famous Linux user who eventually came back from XFCE
> to GNOME: "But I'm actually back to gnome3 because with the right
> extensions it is more pleasant."[1]
> 
> But I'm not sure if he qualifies as a power user or is just another guy
> who like Windows 8 or OS X. *scnr*

I wait with bated breath for when the next GNOME version update breaks
all of his extensions (as GNOME version updates are wont to do).  And
then when he complains, hopefully he won't swear too much when he's
told that it's his fault for depending on GNOME extensions, since
there are zero guarantees of compatibility given by GNOME.

(This is Linus "thou shalt not break userspace" Torvalds we're talking
about.  And the GNOME extensions are the diametric opposite of that
philosophy.)

- Ted





Re: Reverting to GNOME for jessie's default desktop

2014-08-13 Thread Theodore Ts'o
On Wed, Aug 13, 2014 at 10:18:49PM +0200, Matthias Klumpp wrote:
> Well, Linus' extensions won't break because GNOME updates them with
> every release and ships them with the official GNOME release.

From the README found in "gnome-shell-extensions" sources:

GNOME Shell Extensions is a collection of extensions providing additional
and optional functionality to GNOME Shell.
Since GNOME Shell is not API stable, extensions work only against a very
specific version of the shell, usually the same as this package (see
"configure --version"). Also, since extensions are built from many
individual contributors, we cannot guarantee stability or quality for any
specific extension.
For these reasons, distributions are advised to avoid installing or
packaging this module by default.

So again, it'll be interesting to see how many extensions work when
3.14 gets released, and how many just break or just silently
disappear

Of course, nothing which is officially in GNOME is guaranteed to
stick around, either.  Functionality which is part of "official" GNOME
has commonly disappeared in a version "upgrade" as well, and the
GNOME Shell Extensions have a lesser guarantee of stability than
features in core GNOME.

At least for me, it's a case of "Fool me once, shame on you; fool me
twice, shame on me."

- Ted





Re: Reverting to GNOME for jessie's default desktop

2014-08-13 Thread Theodore Ts'o
On Wed, Aug 13, 2014 at 06:34:46PM -0400, Hashem Nasarat wrote:
> 
> The following "first party" extensions are developed along with
> gnome-shell and are updated for each gnome-shell release.
> https://git.gnome.org/browse/gnome-shell-extensions/tree/extensions
> 
> Extensions on https://extensions.gnome.org/ are the ones that are often
> late with updating based on new releases.

I note that the workspace grid extension (which seems to be the only
way you can get a 2-dimensional workspace) is not a "first party"
extension.  So if you depend on it, you are more at risk than usual if
you use GNOME

- Ted





Re: systemd, again (Re: Cinnamon environment now available in testing)

2014-09-08 Thread Theodore Ts'o
On Sat, Sep 06, 2014 at 09:39:05AM -0700, Russ Allbery wrote:
> Note also that a few of those things (udev, adduser, and
> libdevmapper1.02.1 for example) are likely to be on any non-chroot system
> already since they're either dependencies of other things (such as grub
> for libdevmapper1.02.1) or are already in use regardless of the init
> system (udev).  So for the case of a small embedded system that's
> nonetheless running the full kernel + bootloader stack, I suspect the
> delta is even smaller.

I can give a hard data point.  A month ago, debootstrap in Jessie was
still giving you a sysvinit based system.  I build a VM that has a
minimal debootstrap, with a very small set of packages[1], plus
xfstests.  In early August, this VM was 54 megabytes

[1] 
https://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/kvm-xfstests/test-appliance/packages

I spent a good part of this past weekend updating kvm-xfstests to use
systemd, since debootstrap now forces systemd on you, and so I decided
to bite the bullet and convert to systemd.

This was not quite trivial, because I depended on being able to run
xfstests in /etc/rc.local, and the serial console getty would start up
before /etc/rc.local had finished, and then HUP the entire xfstests
run.  Still, after fighting with the systemd unit files, I finally
managed to get it all working again.

The resulting VM image was 62 megabytes[2], or about 15% larger.
Since the VM image generation is completely automated[3], I'm
confident that this is an apples-to-apples comparison.

[2] https://www.kernel.org/pub/linux/kernel/people/tytso/kvm-xfstests/
[3] 
https://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/kvm-xfstests/test-appliance/gen-image

Cheers,

- Ted

P.S.  Note what is required to be fully GPL compliant when
distributing a VM image[4].  You need to be able to identify the
precise sources for *all* of the GPL'ed packages used for a particular
VM image, and it's something that most people don't bother to do.  To
(loosely) quote Bradley Kuhn from his recent talk at LinuxCon, "it's
all too easy to accidentally violate the GPL; I'm sure I've done it
from time to time".

[4] ftp://ftp.kernel.org/pub/linux/kernel/people/tytso/kvm-xfstests/README





Re: Trimming priority:standard

2014-09-12 Thread Theodore Ts'o
On Thu, Sep 11, 2014 at 07:41:19PM -0700, Russ Allbery wrote:
> 
> > * telnet: dead for 19 years.  Used only by those who misspell 'nc' and hope
> >   for no 0xff bytes.
> > * wamerican: what use is a wordlist with no users?
> 
> Both of these fall under the "anyone familiar with UNIX would go 'where
> the hell is X' if the package isn't installed" provision, I think.  Yes,
> nc is better than telnet, but telnet is part of a *lot* of people's finger
> memory, and I think removing the package violates the principle of least
> surprise here.  It's not very large.

A large number of these packages would fall into this category.
Arguably this would include dc and m4.  (Trivia fact: dc predates the
C programming language, and it has macros, conditionals, and looping
constructs.  :-)

That being said, if there are Debian users who are not Unix-heads,
they aren't going to miss any of these.  What if we create a tasksel
task called "Unix" that installs these traditional Unix commands from
the BSD 4.x era?  It would include dc, m4, /usr/dict/words, telnet,
etc.

> wamerican provides /usr/share/dict/words, which is widely used in a
> variety of strange places you wouldn't expect, like random test suites.

True, but that's a developer thing.  The argument can be used for m4
and dc as well --- that they can be used in all sorts of places you don't
expect. 

- Ted





Re: Trimming priority:standard

2014-09-12 Thread Theodore Ts'o
On Fri, Sep 12, 2014 at 03:12:47PM +0100, Simon McVittie wrote:
> 
> (Admittedly, cron has to be Priority:important anyway, to support
> logrotate - until/unless someone adds a logrotate.timer for systemd, and
> makes its cron job early-return if systemd is pid 1.)

It's inevitable that systemd will subsume cron, with an incompatible
configuration file format.  :-)

- Ted





Re: Trimming priority:standard

2014-09-12 Thread Theodore Ts'o
One thought... there will probably be trademark concerns with "unix".[1]
So we might have to choose a name for the tasksel task to be something
like "unix-like".

[1] http://www.unix.org/trademark.html

- Ted





Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-11 Thread Theodore Ts'o
On Sat, Oct 11, 2014 at 10:37:26AM -0700, Russ Allbery wrote:
> > You have convinced me that in this case it's going to have to be that
> > way, so my prejudices notwithstanding.  I've rationalised the pain away
> > by deciding it's no so bad as any competent programmer could see that is
> > it only tested to 190 regardless of what the standards say.
> 
> Yeah, I do get that discomfort.  I would love for Policy to be more
> accurate about what's actually happening in the archive.  I just don't
> have much (any) time at the moment to try to push the wording in that
> direction.

I assume that posh meets the strict definition of 10.4.  And so
without actually changing policy, someone _could_ try setting /bin/sh
to be /bin/posh, and then start filing RC bugs against packages that
have scripts that break.   Yes?

Given that the freeze is almost upon us, I could see how this might be
considered unfriendly, but if someone wanted to start filing bugs (at
some priority, perhaps RC, perhaps not) after Jessie ships, we could
in theory try to (slowly) move Debian to the point where enough
scripts in Debian worked under /bin/posh that it might be possible to
set it as a release goal for some future release.  Yes?

Now, this might be considered not the best use of Debian Developers'
resources, which is why it might be considered bad manners to do
mass bug filings, particularly mass RC bug filings at this stage of
the development/release cycle.

But if individual Debian developers were to fix their own packages, or
suggest patches as non-RC bugs, there wouldn't be any real harm, and
possibly some good (especially for those people who are very much into
pedantry, and don't mind a slightly slower system --- but if a user
wants to use /bin/posh, that's an individual user's choice :-)

Cheers,

- Ted





Re: dgit and upstream git repos

2014-10-11 Thread Theodore Ts'o
On Tue, Oct 07, 2014 at 08:26:45AM -0700, Russ Allbery wrote:
> I understand why you feel this way, particularly given the tools that
> you're working on, but this is not something I'm going to change as
> upstream.  Git does not contain generated files, and the tarball release
> does, because those two things are for different audiences.  Including the
> generated files in Git generates a bunch of churn and irritating problems
> on branch merges for no real gain for developers.  Not including them
> makes it impossible for less sophisticated users to deploy my software
> from source on older systems on systems that do not have Autoconf and
> friends installed for whatever reason.

The flip side is that you can get burned by people trying to compile
from your git tree on either a significantly older or a significantly
newer system than what you typically use to develop against, and if
autoconf and friends have introduced incompatible changes to the
autoconf macros used in your configure.in.

I've gotten burned by this way many, many times, which is why I
include the generated configure script with respect to *my*
development system.  That way, developers running on, say, a RHEL
system, or developers running on some bleeding edge Gentoo or sid or
experimental branch won't lose so long as they don't need to modify
configure.in, and so need to generate a new configure script.

It does mean that sometimes people lose because they need to build on
some new system, and so they need a new version of config.guess or
config.sub, and instead of simply dragging in a new version of those
files, they try to regenerate *everything* and then run into the
incompatible autoconf macro change.  But if I forced people to run
autoreconf on every git checkout, they would end up losing all the
time anyway...  This way they only lose when they are trying to
develop on some new OS class, such as ppcle, or make a configure.in
change *and* when the autoconf macros become backwards incompatible.

(Or maybe the answer is I should stop doing so much complicated,
system-specific stuff in my configure.in --- but given that I'm trying
to make sure e2fsprogs works on *BSD, MacOSX, and previously, Solaris,
AIX, and many other legacy Unix systems, I need to do a lot of
complicated low-level OS specific tests, and those are the ones which
have historically had a very bad track record of failing when the
autoconf/automake/libtool/gettext developers made changes that were
not backwards compatible.)

> I say this not to pick a fight, since it's totally okay with me that you
> feel differently, but to be clear that, regardless of preferences, the
> reality that we'll have to deal with is that upstreams are not going to
> follow this principle.  I know I'm not alone in putting my foot down on
> this point.

Indeed, there are no easy answers here.  I personally find that
resolving branch conflicts (which you can generally do just by
rerunning autoconf) is much less painful than dealing with breakages
caused by changes in the autoconf macros, especially since it's fairly often
that people are trying to compile the latest version of e2fsprogs on
ancient enterprise distros.

But of course, your mileage may vary, depending on your package, and
where your users and your development community might be trying to
compile your package.  (I have in my development community Red Hat
engineers who are forced to compile on RHEL, as part of their job.  :-)

Cheers,

- Ted





Re: A concerned user -- debian Guidelines

2014-11-10 Thread Theodore Ts'o
On Mon, Nov 10, 2014 at 02:34:33PM +0100, Matthias Urlichs wrote:
> The Wanderer:
> > Unfortunately, as far as I can tell, no one seems to be remotely
> > interested in trying to address or discuss that disagreement directly...
> 
> The problem is that, apparently, any 'support' short of "remove systemd
> from Debian NOW" will not shut up the most vocal detractors.

There will always be some vocal detractors, and yes, there will be
absolutely no way to make the most radical people shut up.

Part of the problem is that there are people who are working on making
things less painful for those people who don't want to support
systemd, and even for people like myself who have resigned themselves
(or are at least willing to use systemd on my laptop for now), but who
under no circumstances are willing to use GNOME[1].  However, these
efforts are on a best-efforts basis, and no one is willing to make any
public commitment about what will and won't work in Jessie or
post-Jessie --- which is fair enough, because this is a volunteer
project, and so it's not like it's a promise we could really make
anyway --- and if the GNOME folks yoke themselves even more
firmly to some new systemd extensions (for example, perhaps a future
version of network manager will blow up unless you use the systemd
replacement for cron or syslog), that's an upstream change, and we
can't rewrite all of upstream.

However, at this point, given that Jessie is frozen, I think it will
be possible soon to be able to make some statements about what will
and won't work with Jessie, vis-a-vis using either systemd or any
alternative init system, and even give instructions if someone wants
to install Jessie and then switch to an alternative init system.  And
I suspect even more importantly for many people, which alternative
desktops will work with systemd, and how to work around various
breakages that the switch to systemd might have engendered.  If we can
tell people that it's OK, Jessie isn't going to force you to switch to
GNOME 3, and if you want your text log files, you can keep your text
log files, etc., I think there will be people (not the most vocal
detractors, admittedly) that will probably be reassured and less
fearful about what the New Systemd World Order will bring.

It may be that the release notes would be a very fine place for some
of this information; it might be useful for dispelling many of the
myths held by people who might not be using testing.  Once they know
that, while things did get rocky for a bit, XFCE and other alternative
desktops work very well, thank you very much, they will hopefully feel
much more reassured.

At that point, I suspect the remaining fears will be about what may
break post-Jessie, as systemd starts taking over even more low-level
system components, and perhaps all we can do there is have some
maintainers make declarations about what they are and aren't willing
to do with their volunteer time.  The future is always uncertain, but
I think if we assume that people are fundamentally trying to do the
right thing, and that there will be people working to make most use
cases work at least as well --- and hopefully even better --- again,
that will hopefully reassure many people that Debian is really
striving to be a Universal OS, and not just a GNOME/Core OS, and that
while some things may break for a while, as long as there are
volunteers interested in fixing things --- and if not at Debian, where
else? --- in the long run All Will Be Well.

Cheers,

- Ted


[1] Well, I'd be willing to invest time to try GNOME again when 2-D
workspaces are supported as a first class feature (i.e., something
where developers will try to avoid randomly breaking this feature on
every new GNOME release --- and indeed, the extensions which provided
for a 2-D workspace broke *again* with the most recent GNOME release,
and last I checked, were still not fixed.)  That's actually the
primary reason why I'm sticking with XFCE, BTW.  If I were reasonably
assured that GNOME wouldn't break my workflow on every release, I'd
certainly consider switching back.





Re: Being part of a community and behaving

2014-11-13 Thread Theodore Ts'o
On Thu, Nov 13, 2014 at 08:25:57AM -0800, Russ Allbery wrote:
> What do you think we should have done instead?  debian-devel was becoming
> the standing debian-canonical-is-evil vs. debian-systemd-sucks standing
> flamewar.  (I think people are already forgetting the whole Canonical is
> evil flamewar that was happening at the same time, with the same degree of
> vitriol that is now being levelled at systemd.)

That doesn't match my perception of the history; but part of this may
have been that the vitriol level escalated significantly once the TC
announced it was going to involve itself in the debate, and it doesn't
look like it has gotten any better ever since.

That being said, I am sure that the TC got involved with the best
intentions, and most of the DD's involved in the discussions were all
united in their passion for wanting the best for Debian (even if they
agreed on very little else, at least on the systemd mail threads :-).

If only everyone could really internalize this belief; I think it
would make these discussions much less painful.

> I think people have an idealistic notion here that consensus will always
> emerge eventually, and it's easy at this point in the process to
> sugar-coat the past and forget how bad it was.  Please, make a concerted
> effort to put yourself into the mindset the project was in during the fall
> of 2013.  It's always easy to see, in hindsight, the cost of the option
> that was taken; it's harder to see the cost of the option that was not
> taken.
> 
> Personally, I strongly suspect that we could have waited until 2020 and
> there still wouldn't be any consensus.  And that has its own risk.

I have a different belief about the future, but (a) there was no way
to know whether things would have gotten worse back in Fall 2013, and
(b) there's no way any of us can know for sure what the future will
bring, or what would have happened if we had taken an alternate path.
All we can do is to go forward, as best as we can.

Because regardless of how this GR is settled, it doesn't really answer
the question about the use of all of the other pieces of systemd; or
at least, I don't think that any of the options are the equivalent of
a blank check adoption of systemd-*, whether it be systemd-networkd,
systemd-resolved, systemd-consoled, etc.  And it sure would be nice if
we don't have the same amount of pain as each of these components get
proposed.  (My personal hope is that if they are optional, as opposed
to being made mandatory because GNOME, network-manager, upower, etc. stop
working if you don't use the latest systemd-*, it won't be that bad
going forward.)

Regards,

- Ted





Re: Being part of a community and behaving

2014-11-16 Thread Theodore Ts'o
On Sun, Nov 16, 2014 at 09:02:12AM -0500, The Wanderer wrote:
> 
> I would, for example, have classified the discussions / arguments in the
> "systemd-sysv | systemd-shim" bug which appears to have recently been
> resolved by TC decision as being an example of what I thought was being
> referred to by the original "bitter rearguard action" reference:
> fighting over the implementation details in an attempt to maintain as
> much ground for non-systemd as possible.

I was really confused that this needed to go to the TC; from what I
could tell, it had no downside for systems using systemd, and it made
things better on non-systemd systems.  What was the downside of making
the change, and why did it have to go to the TC instead of the
maintainer simply accepting the patch?

If this is an example of "bitter rearguard action", my sympathies
would be with those who are trying to keep things working on
non-systemd systems.

Am I missing something?

 - Ted





Re: Being part of a community and behaving

2014-11-17 Thread Theodore Ts'o
On Mon, Nov 17, 2014 at 10:21:13AM +0100, Marco d'Itri wrote:
> On Nov 17, Steve Langasek  wrote:
> 
> > > This is what many still (retorically) wonder about: we the systemd 
> > > maintainers did not reject that change,
> >   https://bugs.debian.org/cgi-bin/bugreport.cgi?msg=15;bug=746578
> Please try to be less selective in your quoting: the issue was still 
> being discussed.

May I gently suggest that tagging a bug "wontfix" has the unfortunate
tendency to perpetuate the perception that the systemd proponents
don't really care about any fallout that systemd might cause on the
rest of Debian --- ESPECIALLY if it's still "open for discussion"?

Especially without any discussion or explanation by any other systemd
maintainer?

It may not be accurate, but right now, given the feeling of hurt on
all sides of the issue, a bit more communication instead of a blunt
"tags + wontfix" without any word of explanation might have
contributed to a more productive amount of discussion.

Best regards,

- Ted





udeb and data.tar.xz files?

2012-05-14 Thread Theodore Ts'o

I'm very confused about what the status is regarding udeb and
data.tar.xz.  Are they allowed or not?  It seems at the moment that
dh_builddeb is creating them by default, and lintian is complaining that
this is an error.

I've done a search through the web and debian-devel and it looks like
things are in transition.  That is, there are people hoping that the
udeb infrastruction will support xz compression, and even an assertion
that the lintian complaint is not obsolete and those references *appear*
newer than references to the udeb-uses-non-gzip-data-tarball error.

Or I could be conservative and hard code something explicit in
debian/rules to force dh_builddeb to pass -Zgzip to dpkg-deb, but that
will be ugly, and risks being obsolete at some point.

Can someone tell me what's up, and ideally update the text at:

http://lintian.debian.org/tags/udeb-uses-non-gzip-data-tarball.html

... so developers can know what to do?

Many thanks,

- Ted





multiarch, required packages, and multiarch-support

2012-06-14 Thread Theodore Ts'o

If a required package (such as e2fslibs, which is required by e2fsprogs)
provides multiarch support, then Lintian requires that the package have
a pre-dependency on the package "multiarch-support"[1].

However, this causes debcheck to complain because you now have a
required package depending on a package, multiarch-support, which is
only at "standard" priority[2].

[1] 
http://lintian.debian.org/tags/missing-pre-dependency-on-multiarch-support.html
[2] http://qa.debian.org/debcheck.php?dist=unstable&package=e2fsprogs

What is the right thing to do to resolve this mutually irreconcilable
set of complaints from either Lintian or debcheck?

Thanks,

- Ted





Clarification on the Origin: field in the Patch Tagging Guidelines?

2012-06-15 Thread Theodore Ts'o
Hi,

I'm trying to understand a better way of using the Origin: field as
specified by DEP-3.

I'm currently using something like this:

Origin: http://git.kernel.org/?p=fs/ext2/e2fsprogs.git;a=commitdiff;h=8f00911a21f4e95de84c60e09cc4df173e5b6701

since DEP-3 seems to strongly encourage a URL.  But this seems really
ugly and painful to me.

From reading the DEP-3, it mentions the use of the Commit: identifier,
but doesn't give any examples of how this would be done.  Would
something like this be acceptable instead?

Origin: upstream, Commit:8f00911a21

I assume as long as there is clear documentation on where to find the
canonical upstream repository (perhaps in debian/README.source or
debian/copyright) this would be considered acceptable?   Or maybe it
would be better to include a new repository designator in the Patch
Tags, i.e.:

Upstream-VCS: git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git
VCS-Branch: debian

What do people think?

- Ted

P.S. One of the things I'm thinking about doing is writing a script which
automatically generates the debian/patches directory from the git
repository.  So when I specify the base release (i.e., v1.42.4), it will
do something like git format-patch, but in the debian/patches layout of
the 3.0 (quilt) format.  That way I don't have to replicate the patches twice in my git
tree (once as the real commit, and once in the commits which create the
debian/patches/* files).





Re: EFI in Debian

2012-07-05 Thread Theodore Ts'o
On Wed, Jul 04, 2012 at 12:51:01PM +, Tanguy Ortolo wrote:
> Tanguy Ortolo, 2012-07-04 14:13+0200:
> > A blog post explaining how to set up Debian to boot via UEFI:
> >http://tanguy.ortolo.eu/blog/article51/debian-efi
> > A message to this list detailing the UEFI boot procedure and what is
> > required to support it:
> >
> >http://lists.debian.org/debian-devel/2012/01/msg00168.html
> 
> (basically, we already have everything needed to boot via UEFI (not with
> SecureBoot of course, though), only the Debian installer does not
> support it)

James Bottomley has been doing some work to support Secure Boot.  See:

  http://lwn.net/Articles/503820/

His work was done specifically to help other community distributions
beyond Ubuntu and Fedora.  We (the LF Technical Advisory Board) are
currently investigating if there is more the LF can do to support
distributions.  We're not in the position to promise anything just
yet, but if Debian has any suggestions of things that you might like,
do please let me know.

Regards,

- Ted





Re: Anybody using quilt?

2013-09-14 Thread Theodore Ts'o
On Mon, Aug 26, 2013 at 03:21:00PM +0200, Svante Signell wrote:
> 
> Are any of you Debian maintainers/developers using guilt (git+quilt) for
> patch management/development? Is it good or bad? If you are not, what
> do you use?

I use guilt for managing ext4 development for the upstream kernel.
It's perfect for my workflow.

See the section "The ext4 patch queue" on the wiki page:

https://ext4.wiki.kernel.org/index.php/Ext4_patchsets

for more details.

(Sorry for the late reply; I'm behind on some of my mailing lists.)

- Ted





Re: Bug#727708: tech-ctte: Decide which init system to default to in Debian.

2013-10-30 Thread Theodore Ts'o
On Mon, Oct 28, 2013 at 06:21:27PM -0700, Russ Allbery wrote:
> Well, I've said this before, but I think it's worth reiterating.  Either
> upstart or systemd configurations are *radically better* than init scripts
> on basically every axis.  They're more robust, more maintainable, easier
> for the local administrator to fix and revise, better on package upgrades,
> support new capabilities, etc.

Can you please go into more detail about why you believe this is true?

The last time I played with Upstart, I saw a lot of policy moved from
shell scripts into C code (which I would have to edit and recompile)
if I wanted to change things.  I also was extremely frustrated with a
massive lack of documentation, where at least with shell scripts I
could read the scripts to understand what was going on.

Maybe things have changed, but that was my impression with both
Systemd and Upstart (and policykit, and consolekit, etc., all of which
have caused me no end of frustration).

- Ted





Re: Bug#727708: tech-ctte: Decide which init system to default to in Debian.

2013-10-30 Thread Theodore Ts'o
On Wed, Oct 30, 2013 at 06:18:29PM -0700, Russ Allbery wrote:
> I suspect you and I have a root disagreement over the utility of exposing
> some of those degrees of freedom to every init script author, but if you
> have some more specific examples of policy that you wanted to change but
> couldn't, I'd be interested in examples.

It's not necessarily the init script author who might want the degrees
of freedom, but the local system administrator.

The most basic is the ability to control (via shell script fragments)
whether or not a service should start at all, and what options or
environment settings should be used, by parsing some file; that is,
the fact that we can put that sort of thing in configuration files
such as /etc/default/*, for example.

Yes, yes, you can do this if you use System V init scripts in
backwards-compatibility mode, but you've argued that we should be
moving briskly away from that.  In which case system administrators
will need to hand-edit the service files, which will no doubt
increase the chances of conflicts at package upgrade time, compared to
keeping the configuration options isolated away in files such as
/etc/default/rsync (for example).
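
(To make the pattern concrete, here is a rough sketch of how that works
today; the variable names are only illustrative and may not match exactly
what the real rsync package ships:

    # /etc/default/rsync -- local configuration, preserved across upgrades
    RSYNC_ENABLE=false
    RSYNC_OPTS='--address=127.0.0.1'

and then near the top of /etc/init.d/rsync, something like:

    # pull in the local settings, and bail out quietly if the admin
    # has disabled the service
    [ -r /etc/default/rsync ] && . /etc/default/rsync
    if [ "$RSYNC_ENABLE" != "true" ]; then
        exit 0
    fi

so the administrator never has to touch the init script itself to disable
or reconfigure the service.)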

> I realize that
> the local administrator may have other goals, and they should have ways of
> achieving them, but both systemd and upstart support running SysV init
> scripts for those cases.

If the package does not ship a SysV init script (which is your ideal
long-term outcome), that may not be a very practical option for a
system administrator who would need to recreate a SysV init script,
especially if the service file is rather complicated, or is using some
of the more advanced features of systemd/upstart.

- Ted





Re: Bug#727708: tech-ctte: Decide which init system to default to in Debian.

2013-10-31 Thread Theodore Ts'o
On Thu, Oct 31, 2013 at 01:41:53AM -0700, Steve Langasek wrote:
> I'm surprised by this comment.  Very little policy is actually encoded in
> upstart's C code; in fact, the only policy I can think of offhand that is is
> some basic stuff around filesystems, which, aside from some must-have kernel
> filesystems without which it can't boot the rest of the system, should be
> entirely overrideable via /etc/fstab.  Perhaps you could expand on what
> policies you saw a need to change?

The details are a bit fuzzy, because this was quite a while ago,
when Upstart was first introduced into Ubuntu, and it was so
frustrating that it was what caused me to abandon Ubuntu and switch
back to Debian.  The high bit was that I couldn't get a particular
service to start (it might have been bind, or some such), and I had no
idea how to debug the darned thing.  With shell scripts, it's possible
to insert "echo debug 1 $variable >> /tmp/debug.log" to figure out
what's going on.  With upstart, I had no way of figuring out what was
going on, and why it was failing, and the "no user-serviceable parts
inside" approach was extremely frustrating.

I'm sure part of the problem was lack of documentation.  That seems to
be a common theme with many of these "higher level language" systems.
They may be powerful if you know the magic XML file to edit (in the
case of PolicyKit), but it took me several hours before I figured out
even something as simple as "say 'yes' to all authorization
questions", which is how I still run to this day, because (a) the
default of prompting for the root password in popup windows all the
time was too painful, and (b) trying to figure out the XML language,
and all of the triggers, etc., was ***far*** too painful.  One of the
nice things about shell scripts is that they are far more
self-documenting, and easier to debug, than XML and other
'higher-level' configuration files (at least for this dumb kernel
hacker :-).
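
(For reference, with the newer polkit "local authority" mechanism the
"yes to everything" answer can at least be expressed without XML, in a
small file dropped into /etc/polkit-1/localauthority/50-local.d/; the
file name and the choice of the sudo group below are just illustrative,
and obviously this throws away whatever protection polkit was supposed
to provide:

    [Answer yes to everything for members of the sudo group]
    Identity=unix-group:sudo
    Action=*
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes

Older PolicyKit versions used an entirely different XML format in
/etc/PolicyKit/PolicyKit.conf, which is part of why the right
incantation is so hard to find.)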

So hopefully that is something the technical committee will take into
account --- how well things are documented, both in terms of a
comprehensive reference manual, and a tutorial that helps people with
common things that system administrators might want to do.  The
documentation you pointed to at http://upstart.ubuntu.com/cookbook/
is something I wish I had had access to when I was first forced to use
Upstart; maybe if Upstart had been as polished back then, I might not
have given up on Ubuntu in disgust.

Regards,

- Ted





Re: Bug#727708: tech-ctte: Decide which init system to default to in Debian.

2013-10-31 Thread Theodore Ts'o
On Wed, Oct 30, 2013 at 10:52:15PM -0700, Russ Allbery wrote:
> >> You can do quite a bit with the hooks that are part of the specification
> >> of both types of files.  For example, logic that you may add to control
> >> whether the service should start at all can be implemented by adding a
> >> pre-start stanza to the upstart configuration.
> 
> > ExecStartPre=/bin/false
> 
> > will make the service be considered failed.  The ExecStartPre line can
> > of course be an executable that implements more checking or logic.
> 
> Ah, thank you!  I got lost in the systemd.* man pages and didn't find the
> systemd.service one somehow (even though it's right there listed first;
> sigh).

I found the systemd man pages, and came across the definition of
ExecStartPre, but I didn't make the connection that returning false
would be sufficient to stop the daemon from starting.  It was there,
but that's the difference between a reference manual and a
tutorial/cookbook -- a reference manual won't necessarily explain the
implications of "the unit fails".
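
(In other words, something like the following in the service's unit file
is enough to gate whether the daemon comes up at all; the service and
script names here are made up for illustration:

    [Service]
    # if this command exits non-zero, the unit is considered failed and
    # ExecStart is never run
    ExecStartPre=/usr/local/sbin/should-food-start
    ExecStart=/usr/sbin/food

where /usr/local/sbin/should-food-start can contain whatever local policy
the administrator wants, including sourcing something like
/etc/default/food and exiting non-zero to suppress the start.)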

The upstart documentation, from my brief examination, seems to be much
more approachable for someone who is starting from ground zero.

- Ted





Re: Bug#727708: tech-ctte: Decide which init system to default to in Debian.

2013-10-31 Thread Theodore Ts'o
On a different subject, which I don't think has been raised so far ---
have the Debian maintainers of the upstart package made any comments
about bug fixes or code contributions from Debian Developers who are
personally opposed to being forced to sign copyright assignment
agreements, or whose employers forbid them from signing copyright
assignment agreements (both of which apply to me)?  If I submit a
bugfix as part of a bug report, will the Upstart maintainers reject it
out of hand if I have not executed a copyright assignment with
Canonical, or will they be willing to consider either carrying it as a
local Debian patch, or rewriting the commit and submitting it upstream
to Canonical?

I don't think copyright assignment is a concern which afflicts
systemd, although there is a related concern: the upstream systemd
developers appear to have a very strong point of view, and if some
change needed for Debian or Debian's users conflicts with that point
of view, I could imagine situations where the Debian maintainers for
systemd might need to carry a Debian-specific change, perhaps
indefinitely.

Is this something that has been considered by the maintainers, and is
this something which is important to other DD's and the
tech-committee?

- Ted





Re: Linux Future

2013-01-22 Thread Theodore Ts'o
On Tue, Jan 22, 2013 at 03:05:58PM +0100, Josselin Mouette wrote:
> Yet full of misinformation, like the idea that using D-Bus makes a
> service less scriptable (while the reality is a complete opposite), or
> that configuration files are less human-readable than shell scripts.

My biggest complaint about D-Bus is that it's not well documented.
One of the really strong bits of the Unix Way is the strong push to
make sure everything is documented, even if in a very fragmented way,
in a man page, and since everything is done using man pages, it's
relatively easy to find things.  This is critically important when
you're writing a shell script.

One of the big things which is incredibly frustrating with the D-Bus
interfaces is that they aren't documented; and if they are documented,
it's not obvious where.  So more than once, I've been reduced to
trying to figure out some python code, or C++ code, etc., just to
figure out how to keep networkmanager from asking me for a password
every single time I moved to another random Wifi access point.
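
(Run-time introspection at least lets you dump what methods and signals a
service exposes, even though it doesn't tell you what they mean or how
they are intended to be used; for example:

    dbus-send --system --print-reply \
        --dest=org.freedesktop.NetworkManager \
        /org/freedesktop/NetworkManager \
        org.freedesktop.DBus.Introspectable.Introspect

will spit out the interface description XML for NetworkManager's root
object, which is better than nothing, but is no substitute for a man
page.)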

I finally figured out the magic file that I needed to edit so that
PolicyKit would return "Yes, damn you, get the f*ck out of my way",
but it was not at all well documented, nor in a place that would be
easy to find.  And what I chose may not have been secure, but I got
tired of figuring out what the right way to fix the damned thing was,
and I chose the simplest way, so that I would not be asked for a
password whenever I tried to add a new printer, or a new wifi network.

The irony, of course, is that PolicyKit/ConsoleKit was supposed to
make things more secure.  But at least for my desktop, I've decided to
run things in a less secure way just because it was too painful to
figure out how to make it do the right thing.

  - Ted





Re: Non-source Javascript files in upstream source

2014-05-02 Thread Theodore Ts'o
On Fri, May 02, 2014 at 09:20:02PM +0200, Bas Wijnen wrote:
> 
> 1. Do we need to check that generated files which we don't use are actually
>generated from the provided source?  Main example here is a configure file
>which gets overwritten during build.

For the record, the reason why I ship a configure as well as a
configure.in file with e2fsprogs is simply because I don't trust that
an arbitrary version of autoconf won't break with what I have in my
configure.in.  And I don't want to trouble-shoot some random Gentoo
user's version of autoconf --- as far as I am concerned, the autoconf
maintainers have provided no guarantees of stability between versions,
at least none that I am willing to trust.

So I am only going to ship configure as generated by a version of
autoconf that I have personally tested as working.  And there have
been times in the past when I've simply kept an older version of
autoconf because the current version of autoconf was busted as far as
I was concerned, and I didn't have time to trouble-shoot the damned
thing.
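
(The one guard autoconf itself provides here is AC_PREREQ: putting
something like the following near the top of configure.in makes an
older-than-tested autoconf bail out loudly instead of silently generating
a different configure, though it does nothing about a newer autoconf,
which is most of the problem here.

    dnl refuse to run with an autoconf older than the version the shipped
    dnl configure was generated and tested with; 2.69 is just an example
    AC_PREREQ([2.69])

That doesn't resolve the DFSG question, but it at least makes version
mismatches visible.)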

If someone starts complaining that I shipped a version of configure
that corresponded to autoconf version N, and sid just uploaded N+1,
and therefore my configure doesn't match with my configure.in, and
that's therefore a DFSG violation, I'm going to be really annoyed.

Heck, when autoconf has been busted, there have been times when
modifying the configure script directly *was* my preferred form for
making modifications...

*Especially* since Debian stable, testing, unstable, and experimental
might theoretically have completely different versions of autoconf,
making it fundamentally impossible to guarantee that configure is
"exactly generated" from the version of autoconf in all releases of
Debian.

There is such a thing as trying to adhere to the DFSG to the point of
insanity...

- Ted





Re: use of RDRAND in $random_library

2014-06-12 Thread Theodore Ts'o
On Thu, Jun 12, 2014 at 10:19:37AM -0700, Russ Allbery wrote:
> I've never seen a convincing argument that the kernel /dev/random is
> likely to be *less* secure than the hardware random number generator.
> It's either more secure or the same level of security.  Given that, it's a
> risk analysis, and given that we have absolutely no idea what the
> hardware random number generator is doing, that it would be quite possible to
> insert a mathematical back door into it, and that there's no way to audit it, I
> understand why people want to put a software randomization layer that we
> *can* audit in front of it.

One thing to worry about is what happens if the software library (if
it is implemented as a shared library) gets modified by a bad guy.
One thing a really paranoid program can do is to mix (via XOR, for
example) the output that it gets from /dev/urandom or this
cryptorandom shared library with RDRAND.

That way, if Unit 61398 tries to hack the cryptorandom shared library,
there's a fail-safe (assuming they haven't hacked Intel's internal
systems to introduce a back door --- if anyone has done that it's much
more likely to be the NSA :-) since the PRC presumably wouldn't be
able to hack RDRAND.

Mixing RDRAND with other sources of entropy, which is what
/dev/urandom does, is something that userspace programs can do as
well, especially since it's as easy as grabbing entropy from RDRAND
when it happens to be available.  Even if /dev/urandom is
bug-free(tm), I can't guarantee that the bad guy hasn't hacked your
kernel --- or worse, hacked whoever is building and uploading the
kernel .deb's to the upload queue (or hacked someone on the ftpmaster
team --- remember, the NSA hunts sysadmins :-).

- Ted




