Re: beginners question ?

2002-10-28 Thread Bob Proulx
Lars Segerlund <[EMAIL PROTECTED]> [2002-10-28 17:14:18 +0100]:
> 
>  I'm just starting to use the GNU autotools, and I have some small
> problems.  I have figured out how to build in some subdirs and to
> have reasonable include paths, but how do I link with X11?  I'm using
> automake and autoconf and have a subdir which contains the source
> and a Makefile.am something like this:
> 
>  src/Makefile.am
> 
>  
> 
> INCLUDES= -I$(top_builddir) -I$(top_builddir)/lib

I believe those should be top_srcdir and not top_builddir.  Using
configure you may build in a completely different directory from your
source and then you would notice this as a problem at that time.

  INCLUDES= -I$(top_srcdir) -I$(top_srcdir)/lib

> bin_PROGRAMS = prog
> prog_SOURCES = prog.c prog2.c prog2.h
> 
>  And I want it linked with -lX11 ??
>  Do I add to this file or the toplevel Makefile.am or toplevel 
> configure.ac ?

Use LDADD to add libraries to your link phase.

  LDADD = -lX11

Frequently this is:

  LDADD = ../lib/libmystuff.a -lX11

The local library specification is always relative (unlike the
$(top_srcdir) paths above) so that it refers to the build tree.
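
Putting it together, a minimal src/Makefile.am for your case might
look something like this (libmystuff is just a placeholder for your
own library name):

  INCLUDES = -I$(top_srcdir) -I$(top_srcdir)/lib

  bin_PROGRAMS = prog
  prog_SOURCES = prog.c prog2.c prog2.h
  prog_LDADD = ../lib/libmystuff.a -lX11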

Bob





Re: [Fwd: Re: X11 and configure.in]

2002-10-30 Thread Bob Proulx
Lars Hecking <[EMAIL PROTECTED]> [2002-10-30 10:36:09 +]:
>   Well, I'm using AC_PATH_X but I don't know how to use this in the 
> Makefile.am.  I have tried to get it to work with:
> 
> LDADD   = -lX11 -lm -L$(x_libraries)

For one thing you need to place all of your -L options before your -l
options.  The list is scanned left to right.  You need to tell it the
directories to look in first and then the libraries to link with
second.
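
So, assuming AC_PATH_X has set x_libraries in your configure script,
something along these lines should work:

  LDADD = -L$(x_libraries) -lX11 -lm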

Bob





Re: weird flags set by configure

2002-11-04 Thread Bob Proulx
[EMAIL PROTECTED] <[EMAIL PROTECTED]> [2002-11-04 12:43:48 +0200]:
> Small question:
> 
> Why do my autoconf-generated configure scripts automatically put
> debugging info "-g -O2" into the flags (or just -g if I ask nicely)?
> When I compile release versions of my code, how do I (elegantly) ask
> automake, or rather autoconf (through configure.in), to omit this from
> CXXFLAGS?

GCC is viewed as being able to debug (-g) and optimize (-O2) at the
same time without any detriment.  Then if you want to strip the binary
to remove the debug information you can.  The make target to install
and strip is 'install-strip'.  Therefore the default is generally
good.
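
For example, a typical stripped installation would be:

  ./configure
  make
  make install-strip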

If the compiler is not GCC then it only adds debugging (-g) and stops
there since most compilers can't do both at the same time.

If you want to change this you can set CXXFLAGS at configure time.

  CFLAGS=-O CXXFLAGS=-O ./configure

> The content of this email and any attachments are confidential and
> intended for the named recipient(s) only.
> 
> If you have received this email in error please notify the sender
> immediately.
> Do not disclose the content of this message or make copies.
> 
> This email was scanned by eSafe Mail for viruses, vandals  and other
> malicious content.

Those quasi-legal scare footers are both worthless and annoying.
Please try to convince your company to remove them.  Alternately send
mail from other accounts which do not have them.

Bob





Re: weird flags set by configure

2002-11-07 Thread Bob Proulx
Earnie Boyd <[EMAIL PROTECTED]> [2002-11-07 11:39:37 -0500]:
> [EMAIL PROTECTED] wrote:
> >Otherwise, I'm also subscribed on the list, so no need to CC me in every
> >post :-)

Then set M-F-T in your postings.

  Mail-Followup-To: [EMAIL PROTECTED]

> It is a function of the mail client "Reply-All" event to add you in the 
> distribution of the response.  If you don't wish personal copies as well 
> as list copies then it is up to you to set the Reply-To header.

Actually M-F-T is the preferred method, although it is only a de facto
standard and not an official one.

  http://cr.yp.to/proto/replyto.html

If a user sets Reply-To to the list then the same problems ensue as
when a list sets Reply-To.  The Reply-To should go directly to the user
and not to the list.  You would be surprised by the "interesting"
replies which people thought were going to be private but which have
been seen on lists because of this.

  http://www.unicom.com/pw/reply-to-harmful.html

Bob





Re: making script executable?

2003-02-03 Thread Bob Proulx
Simon Richter wrote:
> 
> $(SHELL)  ?

[Drifting off topic...]

Does that mean that a SHELL=/bin/csh user will run the script with csh
and a SHELL=/bin/zsh user will run the script with zsh?  Wouldn't it be
better to use a predictable and most likely a standard shell for both
users?

  /bin/sh "your sh script here"

Bob





Putting intermediate files in the distribution?

2003-04-02 Thread Bob Proulx
I am autoconfiscating a moderately large legacy project.  A previously
existing methodology in the project is to create a large number of .c
and .h files by generating them with a script from a template.  I have
created custom rules to do this and all builds fine.  I originally put
the generated files into CLEANFILES since they are generated files.

But upon a clean build I notice that it takes longer to run the
scripts to build those .c and .h files than it takes to compile the
entire rest of the project!  Yes the scripts are slow and I can
probably speed them up.  But for now I just want to work around the
problem and put that task off until later.  The files are architecture
independent and produce identical files on any platform.  Therefore I
would like to include those into the distribution in order to save the
rebuild time every time I recompile from scratch.

I have been thinking I should put the generated .c and .h files into
both EXTRA_DIST and MAINTAINERCLEANFILES.  Then I could force a
rebuild of those with maintainer-clean when I really want to force
rebuild them.  They would get normal dependency management and be
rebuilt when needed based upon their dependencies changing.  But they
would avoid being rebuilt gratuitously.  I have tested this and it
seems to do what I want but seems questionable to me.  I was looking
for direction.  Does this scheme sound right?

For comparison, how are lex and yacc intermediate .c and .h files
handled?  I think my problem is similar and should be handled similarly.

I will include an example structure at the end of this message.  It is
functional and representative of the problem.  And perhaps someone
will see obvious improvements to which I am blind.

Thanks
Bob

Here is an example:

EXTRA_DIST = file1.d file1.h file1.c ../../include/file1.h ../../src/file1.c
MAINTAINERCLEANFILES = file1.h file1.c ../../include/file1.h ../../src/file1.c

all: $(MAINTAINERCLEANFILES)

file1.c: file1.d
	a_script file1.d

file1.h: file1.d
	a_script file1.d

../../src/file1.c: file1.c
	cp file1.c ../../src/

../../include/file1.h: file1.h
	cp file1.h ../../include/




Re: Putting intermediate files in the distribution?

2003-04-02 Thread Bob Proulx
Paul Brook wrote:
> > I have been thinking I should put the generated .c and .h files into
> > both EXTRA_DIST and MAINTAINERCLEANFILES.
> 
> Putting the files into BUILT_SOURCES should do what you want. This includes 
> them in the distribution, and removes them when you do "make 
> maintainer-clean"

Thank you for suggesting BUILT_SOURCES.  I did not know about that
possibility.  That automatically adds them to the 'all' target, which
is nice.  Except it does not quite work by itself. :-) Those files are
not included in the distribution.  When I tried that route I needed to
put them into the distribution myself, for which I first used the
(undocumented?) DIST_SOURCES list, then switched to the EXTRA_DIST
list to stay on the documented path.

I am thinking my original proposal is still best.  Use EXTRA_DIST and
MAINTAINERCLEANFILES only.  Because otherwise, using BUILT_SOURCES, I
need both of those plus BUILT_SOURCES and DIST_SOURCES, three lists
with identical content.  But oh, wait, I can use one variable in the
other lists to simplify things.  Leaving off the build targets this
seems to work pretty well for my admittedly ugly real-world legacy
support case.

BUILT_SOURCES = \
  ../../src/file1.c ../../include/file1.h \
  file1.c file1.h

EXTRA_DIST = \
  file1.d \
  $(BUILT_SOURCES)

MAINTAINERCLEANFILES = $(BUILT_SOURCES)

Thanks for the information.
Bob




Re: GNU Automake 1.7.9 released

2003-11-10 Thread Bob Proulx
Alexandre Duret-Lutz wrote:
> > Charles Wilson writes:
>  Chuck> What is this "Autoconf 2.59" of which you speak? I saw this
> 
> I'm using the AUTOCONF-2_59 tag from CVS.  I didn't know it
> hasn't been announced yet.  All I can say is that Akim is away
> today and tomorrow, so you'll have to wait if you want an
> official tarball.

On Tue, 04 Nov 2003 15:57:52 +0100 Akim Demaille wrote:
> The Autoconf is happy to announce its release 2.58.  For the time

Since today is Mon, 10 Nov 2003 10:59:23 -0700 it seems very quick to
have another release beyond 2.58 already.  This is looking more and
more like a typo in the tag line.

Bob





Re: config.guess and freedom (was: 1.8 and mkdir_p)

2004-01-13 Thread Bob Proulx
Harlan Stenn wrote:
> The good news and bad news is that your position is a POLICY decision.
> 
> I am talking about a MECHANISM tool.

Agreed.  But it is not a mechanism of automake.  Nor should the
autotools support it, since it embodies a philosophy diametrically
opposed to the one the autotools embrace.  A table driven method
of system identification would be in opposition to the design
architecture of the autotools.

> Well, in the old days we used sysIII and sysV in various incarnations to
> avoid coding a different value for every reseller of those OSes.  Similarly
> for the different BSD releases.

And those days of #ifdef SYSV were terrible days for portability.  You
say resellers as if they were truly the same (and some were), but most
were true forks.  Unfortunately it sounds like you want to recreate
that environment again today.

I still live on one of those OSs which is similar, but slightly
different, from other systems.  Porting software to it is always a
problem.  Especially when someone has a table of systems to determine
whether to call /bin/grep or /usr/bin/grep or whatever and does not
have HP-UX in the table.  I am always hacking on people's misguided
attempts at portability.  And life is interesting all over again with
HP-UX on ia64.

> If the releases are all that similar, why not use:
> 
>  i686-GnuLinux-*
> 
> as your test, and provide the "popular" distributions in the 3rd field?
> 
> The "magic" command has a large database of selections on it; using this
> sort of mechanism should greatly ease the burder on the config.*
> maintainers.

That sounds like the architecture and philosophy of imake.

Bob




Re: 1.8 and mkdir_p

2004-01-14 Thread Bob Proulx
Harlan Stenn wrote:
> > > I think you are missing my point.
> > > The information I am talking about is used for *runtime* decisions - very
> > > likely in a script that is in a shared directory used by many different
> > > architectures.

If for use at runtime then config.guess is very poorly suited.  Do you
really want to run the compiler at every run?  It is a little slow.
On some systems it runs the assembler.

> > Oh, well, config.guess isn't designed for that -- it's for compile time
> > decisions.

In relation to my above comment, clearly for compile time decisions
running the compiler makes a lot of sense.

> You are clearly joking!  I am not saying that I want to run config.guess as
> part of every shell RC file. I am saying the information that *should* be
> returned by config.guess (in its original spec) are sometimes needed for
> runtime decisions in a variety of places.

Uh, how does a runtime program obtain the "information that *should*
be returned by config.guess" without actually running config.guess?

> > uname -s, test -x /bin/rpm, test -x /bin/dpkg
> > are probably what you're after.
> 
> Not at all.

I have a heterogeneous environment and I use runtime tests such as
'test -f /some/file' often.  I also use 'somecommand --someoption' and
check the return code often.  It works very well.  This style of
coding is a single source style which works on different operating
systems without resorting to trying to enumerate all possible
configurations of them.

> I am talking about problems that you apparently have never had to really
> solve.

Hmm...  I have a large number (is >2000 machines of different types
large?) of machines in my lab.  I am willing to guess that I have had
to deal with many of the problems which you are about to propose as
examples which cannot be solved without using a lookup table.  Perhaps
I or others can suggest working alternatives to doing a table lookup
for your problems as well?

But this is clearly getting off-topic for automake.  This would be more
appropriate for the infrastructures[1] mailing list.

Bob

[1] http://mailman.terraluna.org/pipermail/infrastructures/




Re: find a header file

2004-02-04 Thread Bob Proulx
Yanfeng Zheng wrote:
> There was a problem when I compiled my program on Red Hat Linux
> 8.0.  The error message was that a certain header file was
> needed.  Where can I get that file?  Thanks a lot.

Your question has nothing to do with automake.  Why ask it here?

There is no standard header by that name.  If someone is including it
in source code then they are not expecting that program to compile
anywhere but on the author's system.  Some systems use a header file
of that name internally to define things which make sense to that
particular library.  But including the code from one library onto a
different system would not make sense and would not help you compile
the code.  It would just cause a cascade of more errors.

If you have source code which is including that header then try to fix
and improve it by removing that include and replacing it with standard
ones.  Basically remove it, then compile with prototype checks enabled,
or use lint, and include any header files which your code needs to be
completely prototype safe.

Bob




Re: about requiring Perl 5.6 in Automake 1.9

2004-02-08 Thread Bob Proulx
Alexandre Duret-Lutz wrote:
> Perl 5.005_03 will be 5 years old next month, and supporting it
> is becoming painful.

I sympathize.  But actually the problem is not the age of the old
version of perl but rather the youth of the new version.  The new
version has not yet propagated.  But let me propose a way to use
the newer perl to make development easier and still make things work
for the stable production environment community.

> I'm considering dropping support for Perl 5.005 in the future
> Automake 1.9, and require at least Perl 5.6.  Perl 5.6 will be 4
> years old next month, so it does not sound like asking for the
> moon.
> 
> How many people would be annoyed by this?  Is there any reason
> why this would be a very bad idea?

If a newer automake requires /usr/bin/perl to be a newish version then
as Guido mentioned I think it would add to the problem of projects
seeing that as a hardship and not moving forward.  They would need to
upgrade /usr/bin/perl which tends to break things.  But if you avoid
/usr/bin/perl and instead use a configurable newer version from PATH
then a developer can install perl over on the side somewhere.  That
would mitigate the problem of using a newer perl on systems that don't
want to be upgraded at the system level.

Question:  How would you feel if the installed automake were to use
perl from PATH instead of a hardcoded path?  Could this be made
available as an option?

  #!/usr/bin/env perl-5.6

Then it would be a simpler matter for a developer to install perl off
to the side of the system without any risk that anything on the system
depending upon the /usr/bin/perl will be affected.  It still means
that a developer would need to build a newer version of perl.  But
presumably if they are using automake then they are a software developer
and can handle it.  (A false assumption in my personal experience.
But I will still go with it because they *should* be able to do so.)

> I know some people are still using 5.005_03.  We'd have never
> noticed the aforementioned breakages otherwise.  However I do
> not know why they do.  Is there any reason why upgrading to
> newer Perl versions would be undesirable?

I am one of the folks with a large number (>2000) systems all running
perl-5.005_02.  [That is right, _02 not _03.  But unless you count the
bug fixes there is little difference.  And then shortly _03 came out
and we were already stuck on the previous _02.]  Getting that upgrade
through was as painful as trying to get consensus among a large group
of people.  Ah, now I think you are seeing the problem.  Guido Draheim
posted with similar information and the culture of his environment
seemed similar to the culture of mine.


The problem for us is SWIG (the Simplified Wrapper and Interface
Generator).
If it were harder to create perl modules then this would not be a
problem. :-) Folks in the lab compile random modules of their own
authorship into perl.  If perl is upgraded then all of those compiled
modules break.  Since these are all homegrown I have no way to
inventory them.  Some of the authors have moved on leaving binary only
code behind.  They are stashed here and there and everywhere.  And no
one will own up to having them until you actually have upgraded.  At
which time many things will be broken.  At which time the lab manager
will be over at my desk asking me why the project schedule is suddenly
going to slip.  This is not a happy time.  And since various projects
are always in different schedule phases there is never a time
available in which perl may be updated without affecting someone.  A
completely circular trap.


There is a way to break the circular trap.  It is off topic for this
list.  But in summary '#!/usr/bin/perl' causes the problem.  If
instead '#!/usr/bin/env perl' were used then each project can pull
perl from their NFS tool server and move independently of each other.
I am pushing things this direction.  But I imagine others may be still
caught in the trap.  This is likely an issue more predominant in the
stable production environments typical of the commercial unix vendor.

Bob




Re: Problem with configure

2004-02-13 Thread Bob Proulx
Priit Voolaid wrote:
> I don't know if this is the right place to ask my question.  If not, sorry.

[EMAIL PROTECTED] would have been better than [EMAIL PROTECTED].  This
has little to do with automake.

> I want all the necessary files to go in the /opt directory, so I execute
> configure with --prefix=/opt.

More typically you would put them under a package directory when
installing into /opt.  Something like this.

  --prefix=/opt/package \
  --mandir=/opt/package/share/man \
  --infodir=/opt/package/share/info

> All good, configure runs with no problems.
> But after compilation 'make install' fails to copy everything to its
> right place, under the /opt directory.
> After inspecting the configure generated Makefile, I've found the
> problem.  To demonstrate what I mean, here are some lines from an
> example Makefile:
> 
> # Install directories
> PREFIX= --prefix/opt
> prefix= --prefix/opt
> exec_prefix=${prefix}
> BINDIR= ${exec_prefix}/bin
> INST_DIR=   ${prefix}/grass5
> 
> The PREFIX is defined as --prefix/opt instead of /opt.  Why is that?  Where
> can I search for the source of this problem?

That is definitely not what is expected.  And because of that I think
that it was invoked differently than you think.  I would try it again
carefully because I believe it should work.  That or the author put
something strange in the configure.ac file.  The autoconf generated
scripts are very well debugged in that area.  In any case you will
need to do some debugging of it.

Bob




library name enforcement?

2004-05-02 Thread Bob Proulx
Older versions of automake allowed arbitrary library names.

  noinst_LIBRARIES = foo.a

Recent versions of automake now complain about this naming.

  Makefile.am:2: `foo.a' is not a standard library name

I would normally like the lib naming but in this case I am retrofitting
an existing project and others disagree.

I searched the documentation but I could find no way to revert to the
previous behavior.  Any hints?

Bob




Re: library name enforcement?

2004-05-03 Thread Bob Proulx
Dale E Martin wrote:
> Bob Proulx wrote:
> >   Makefile.am:2: `foo.a' is not a standard library name
> 
> Does this work?
> foo_a_LDFLAGS=-module

No.  I get the same result.  I am glad it did not work.  It would have
been just too strange.  Thanks for making the suggestion just the
same.

Bob




Re: excessive bounces

2004-05-25 Thread Bob Proulx
Jay West wrote:
> You should have sent this to the list owner/admin, not the list.

Yes.  But the list owner for the automake list is gnulists-owner at
gnu.org, which is to say, effectively nobody.  The list is really
running entirely on inertia.  For example there are over a hundred
messages in the hold queue awaiting attention of the list owner from
four months ago.

Lars Hecking wrote:
>  Most likely, the same thing happens to OP that happens to me: the lists
>  run at gnu.org are crawling with spam, and a good number of them get
>  rejected somewhere between gnu.org and OP's final destination. After a
>  certain number of rejections, $LIST_MEMBER's subscription is disabled
>  automatically.

I see that too.  Here are some things that can be done about it.

Whitelist monty-python.gnu.org in your access lists.  Never reject
mail from it.  If you want to block spam at the MTA level (always a good
thing) then discard messages instead of rejecting messages from
monty-python.gnu.org.  Generally for viruses I am discarding and for
spam I am rejecting with the exceptions being known mailing list
servers.  I recommend spamassassin deal with what comes through from
the mailing lists.
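
For example, with SpamAssassin a rule along these lines (just a
sketch, adjust it to your own setup) trusts mail relayed through the
gnu.org list servers:

  # in ~/.spamassassin/user_prefs
  whitelist_from_rcvd  *@gnu.org  gnu.org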

Alien9 wrote:
> This is very frustrating, I'm running linux, don't have any viruses,
> and I don't spam people, I hardly ever sent a mail to this list...

Your address is @users.sourceforge.net.  That is most likely the
machine which is bouncing the messages.  As stated there is a huge
amount of spam from gnu.org and if it is rejected then it looks like
your account is rejecting mail.  It is not unusual for large mailing
list servers to get listed in the various RBLs.  sf.net does use RBLs
and so that may also be a source of the problem.

  http://sourceforge.net/docman/display_doc.php?docid=6747&group_id=1#top

You might be better off receiving mail directly on your server.  Then
you could whitelist monty-python.gnu.org which sends the spam, I mean
mailing list messages.

It is also possible that in the path between sourceforge.net and your
final mailbox rejections are taking place.  Check your spam filtering
and ensure that you are not generating rejects to spam or viruses.  If
those rejections are getting back to the mailing list software on
gnu.org then it will appear as if your account is bouncing mailing
list messages.

Hope that helps,
Bob




Re: library name enforcement?

2004-05-25 Thread Bob Proulx
Alexandre Duret-Lutz wrote:
> >>> "Ralf" == Ralf Corsepius <[EMAIL PROTECTED]> writes:
> 
>  Ralf> On Mon, 2004-05-03 at 06:52, Bob Proulx wrote:
>  >> Older versions of automake allowed arbitrary library names.
>  >>
>  >> noinst_LIBRARIES = foo.a
>  >>
>  >> Recent versions of automake now complain about this naming.

Sorry for not being explicit about the particular versions.  I did not
know at the time which version had which behavior.  All I knew was
that it had changed, and I had made an assumption that these changes
would have been known.  But my assumption was flawed.  It had changed,
but not explicitly.  After researching this further I finally have
some more information.

>  Ralf> Recent? Well, AFAICT this (mal-) feature is in automake for years.
>  Ralf> I wish it had never been introduced :(

I saw it relatively recently.  I was using automake-1.4d on HP-UX.

> I've dug some interesting bit of history out of the CVS
> repository.

Wow.  Interesting history.  Thanks for sharing it.

> I'm quite surprised by this thread, though.  First I don't know
> how Bob could have possibly used arbitrary library names (a
> patched Automake, maybe?)

I had automake-1.4d installed on HP-UX and people in my lab were
trying to port a legacy application to it.  The comment came to me
that it worked on our HP-UX but failed on our Debian GNU/Linux
machines.  Different versions with the Debian version much newer.  The
port has stagnated since.  I eventually updated our HP-UX machines to
a newer automake.  That sort'a solved the problem of them being
different because then both platforms behaved the same.  :-/

Getting back to the problem after a delay and I decided to post the
question to the list.  I had assumed it would be well known.  I did
not expect that this check had been in place for so long.

I walked through every previous version of automake that I had
installed in order to determine how this was "working" previously.  I
was starting to doubt my own sanity after a while.  But when I finally
got back to automake-1.4d I was able to recreate the behavior.  That
version does not seem to check that libraries have a lib*.a pattern.

I looked at the code and I do not know why it is not complaining.  I
can only guess that it is a bug which is causing the check to be
avoided.  But from the code that is certainly not the intention.  So I
think this was a bug in 1.4d which by chance gave me a "good"
behavior.

> Then I confess I don't really realize to what extent this
> restriction is annoying.

When creating new projects it is not a problem.  When porting
applications normally it is not a problem.  But in my case I have a
relatively large (>625K ncss/sloc) project which is using an existing
build system (spms+mkmf+zillions of scripts) and I was trying to
retrofit it in place.  It currently creates libraries without any lib
prefix.  These are linked in later using a full name and path.

If I can make both build systems work side by side for a transition
interval I can probably move the project onto the autotools.  But if I
can't then it is unlikely that I will convince the powers that be that
things should be dismantled and reassembled under a different one.  So
for me it is purely a matter of marketing this to others without any
prior open source or free software experience.  Think corp programmers
who have spent their entire career within the private sector.  They
expect things to be difficult and pecular to a project.  But if you
show them a better way they will use it.

My preference would be that if the automake options included 'foreign'
that the naming conventions were relaxed.  This is only for static
archives and not for shared objects.

Bob




Re: package creation

2004-08-15 Thread Bob Proulx
Gustavo A. Baratto wrote:
> Basically, what I am looking for is a 'make package' rule, where all
> the files that would be installed, plus an install script, could be
> tarred up together, so we can copy the tarball to many different
> servers, unpack it, run the script, and the files get installed
> without the need of copying over or nfs mounting the whole source
> code?  Not a fancy full blown package like rpm.

What you want is very easy to do.  But really a package manager is
much the better way to go.

  make install DESTDIR=/var/tmp/pkg-image-area
  (cd /var/tmp/pkg-image-area && tar cvzf /var/tmp/pkg.tar.gz .)
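
If you do want a convenience target for it, a sketch along these
lines could go in the top level Makefile.am (the target and image
directory names are made up):

  # convenience target, not a replacement for a real package manager
  package: all
  	$(MAKE) install DESTDIR=`pwd`/pkg-image
  	tar -C pkg-image -czf $(PACKAGE)-$(VERSION)-bin.tar.gz .
  	rm -rf pkg-image

Here $(PACKAGE) and $(VERSION) are the standard variables that
automake already defines for you.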

> Maybe the install script would have an uninstall feature as well (I
> have seen a thread requesting this).

That is much more difficult.  There are complicated issues there.

Bob




Re: can't login to list

2004-09-05 Thread Bob Proulx
r43233 wrote:
> I have subscribed to the list but can't login.  I have registered and 
> received confirmation.  However during confirmation I forgot to enable 
> cookies and the system won't allow me to login.  I have enabled cookies 
> and exited mozilla and restarted it but still can't login.
> regards
> anoosh

For what purpose do you wish to "login" to mailman?  Isn't being
subscribed enough?  Did you want to unsubscribe?

Logging into the mailman web interface should not have anything to do
with cookies.  Forgive me for assuming but I think you must have just
forgotten your password.  On the list information page here:

  http://lists.gnu.org/mailman/listinfo/automake

At the bottom of the page:

  To unsubscribe from Automake, get a password reminder, or change your
  subscription options enter your subscription email address:

Select [Unsubscribe or edit options] and it will take you to a
password reminder page.  Fill in your address and it will send you
your present password by mail.

If this is not clear send me mail offlist and I will try to help you
with whatever mailing list operation is giving you trouble.

Bob




bug-automake address (was: aclocal.test failure)

2004-11-13 Thread Bob Proulx
Alexandre Duret-Lutz wrote:
> Please do direct bug reports to [EMAIL PROTECTED], a spurious
> failure of CVS Automake is noise for [EMAIL PROTECTED]

There is no mention of bug-automake on the web page.

  http://www.gnu.org/software/automake/

Could one be added?  Here is a proposed patch against the currently
http visible sources.

Bob
--- Automake.html.orig  2004-11-13 18:24:10.589509275 -0700
+++ Automake.html   2004-11-13 18:29:58.512182705 -0700
@@ -37,11 +37,18 @@
 For more information, read the Automake documentation.
 
 
+To report a bug in Automake, you should send a bug report to
+<a href="mailto:[EMAIL PROTECTED]">[EMAIL PROTECTED]</a>.
+More information on the mailing list including list archives can be 
+<a href="http://lists.gnu.org/mailman/listinfo/bug-automake/">found here</a>.
+
+
 The mailing list [EMAIL PROTECTED] is for discussion of Automake,
 Autoconf, and other configuration/portability tools (e.g.,
 libtool).  Write to
 <a href="mailto:[EMAIL PROTECTED]">[EMAIL PROTECTED]</a>
 if you want to join.  Volume is typically low.
+
 
 The automake mailing list (including archives) is available at 
 <a href="http://lists.gnu.org/mailman/listinfo/automake">


Re: automake during development

2004-11-18 Thread Bob Proulx
Jonathan wrote:
> My team is trying to use automake for a C++ project we are developing.
>  The project design has been changing rapidly and we are finding that
> keeping the list of source files in Makefile.am in sync with the
> contents of the directory is very inconvenient.
> 
> An ideal solution would generate an object file for each .cpp file in
> the directory

You might want to look at an old BSD tool called 'mkmf'.  It does
exactly what you are talking about.  It builds using sources that
happen to be found in the same directory.

Having used mkmf for many years on various projects let me caution
against using a dynamic source gathering system.  It is very easy to
make mistakes using it.  I much prefer the explicit listing of files
in the Makefile.am.  When I list new files there I also remember to
check them into revision control at the same time.  With projects
using mkmf it is very easy to forget a file when adding it or to
forget removing a file.  This leads to other team members finding your
bug and chasing you down with dogs and torches when you break their
build.

Bob




Re: Disabling optimization

2004-11-18 Thread Bob Proulx
Bob Friesenhahn wrote:
> Andrew Suffield wrote:
> >>>What you're all trying to say is this:
> >>>
> >>>CXXFLAGS="-g -O0 ${CXXFLAGS}"
> >>Nope, this prevents overriding CXXFLAGS from the environment.
> >
> >It does not. I do it all the time.

On HP-UX:

  aCC -O0
  aCC: warning 901: unknown option: `-0': use +help for online documentation. 

I hate it when developers do things like that.  It frequently means I
need to hack the configure script to build it.  (And I do mean the
configure script.  Usually the developer is using a version of
autoconf that I can't replicate.)  Of course in my example case it is
only a warning.  But in general you can't count on that.

> How does the user portably remove/override the -g and -O0 options?  It 
> seems that you are depending on the user's compiler to support a way 
> to subtract from existing options.  You are also expecting that the 
> user's compiler supports -O0 and doesn't simply exit.

Agreed.

Bob




Re: Configuring automake says autoconf 2.58 or higher needed. Have au toconf 2.59 installed. What is/goes wrong?

2005-01-16 Thread Bob Proulx
Thomas Dickey wrote:
> Ralf Corsepius wrote:
> > I normally respond CC:-ing the reporter on auto*.gnu.org lists, because
> > they tend to be unreliable. Not have done so in this case was just an
> > oversight.
>
> otoh, when I do that, I usually get 2-3 complaints from people stating 
> that I shouldn't (ymmv).

On discussion lists where you must be subscribed in order to post, I
think people do not expect a CC and often complain when they are CC'd.

But on the bug- lists there is no assumption that you are subscribed.
It is an open mailing list and anyone may send a message there.  This
is a long standing policy.  (One that has created some controversy
because of the increase in spam I might add.)

If I know someone is subscribed I usually remove their address from
the To list.  If not then I leave it on.  I always leave the original
poster's address on the list if I do not recognize it specifically.  If
someone CC's me I try not to complain even though I would like them
not to.

Bob




Re: CCing list replies (was: Configuring automake says autoconf 2.58 or higher needed. Have au toconf 2.59 installed. What is/goes wrong?)

2005-01-16 Thread Bob Proulx
In this case I looked at the list of people in the discussion, knew
they were all subscribed, and intentionally mailed only to the list. ;-)

Andreas Schwab wrote:
> Ralf Wildenhues <[EMAIL PROTECTED]> writes:
> > This is not addressed at me, but I also had to learn the hard way
> > that
> > - some gnu.org lists but not all automatically exclude subscribers if
> >   they are listed in To: or Cc:.
> 
> This is customizable, see the mailman options page.

However, that feature does not work well.  It was designed to allow
cross-posting to several mailing lists and to avoid sending a
subscriber of multiple lists the same message.  That part works.
[Basically crossposting to several different mailing lists is
problematic and should be avoided in general.]

This feature is bad for people like me that file mailing lists into
separate folders.  I would only get one copy in one of the folders in
that case.  (I think crossposting to -announce and -devel is good, for
a counter example.  No discussion should happen on -announce.)

But the feature does not work at all if I am both on the To: list and
subscribed.  In that case I would not get the copy from the mailing
list and I would fail to file the message into the appropriate folder
because it would be lacking the mailing list headers.
I think it is a bad mailing list feature.

Bob




Mailing list CC's

2005-01-16 Thread Bob Proulx
Here is a message I recently sent to an individual after observing
that they never CC'd the original poster.  This seemed topical after
reading today's discussion.

Bob


Thanks very much for answering questions on the mailing lists.  It
is appreciated.  However I noticed today that you have not been CC'ing
the original poster.  I looked back through several of your posts.  So
I thought I would drop you a note and let you know that it is very
likely that the people you are replying to are not receiving your
followups.  Which is a shame since they are useful.

This is going to sound contrary to common sense on most discussion
types of lists.  Normally on a discussion list you would only send
your reply to the list and never include the address of the person you
are replying to since they are subscribed.  On discussion lists you
can assume that anyone posting there is subscribed to the list.
Sending them a CC means they get a duplicated message and that is not
appreciated by many on the net.  Certainly it is considered rude on
most discussion lists.

But on bug reporting lists such as the [EMAIL PROTECTED] lists there is no
assumption that the original poster is subscribed.  It is a cold drop
address that is advertised as an address that anyone can send bug
reports.  Therefore on the bug-* lists it is normal to include the
original poster in the address CC list.  Otherwise they won't see the
response at all.

However, anyone who replies to the original poster is certainly
subscribed since that is how they saw that message.  Therefore only
the original poster and the mailing list needs to be listed in
subsequent messages.  If you miss, it is not a big deal.  On the bug-*
lists people don't get upset if they get an additional CC at this
point.  But if possible the address should be original poster and
mailing list only.

Normally I use the Mail-Followup-To: header when I reply to a poster
setting the reply to the original poster and to the mailing list.
After that anyone that replies with a mailer smart enough to respect
that header will do the right thing automatically.  Also, when I post
original messages to these lists I set that header as well to just the
list.  This is all automatic with modern mailers like 'mutt'.  So
while it might sound complicated I don't even notice it.
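
For example, with mutt it is just a matter of declaring the lists you
are subscribed to (the addresses shown are illustrative):

  # in ~/.muttrc -- mutt then sets Mail-Followup-To automatically
  # on messages sent to these lists
  subscribe automake@gnu.org bug-automake@gnu.org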

Hope that helps,
Bob




Re: libtool non standard library name

2005-01-27 Thread Bob Proulx
Jean-Denis Giguere wrote:
> pymod/Makefile.am:4: `_gtkmissing.la' is not a standard libtool library name
> [...]
> The pygtk-2.4.0 use this kind of configuration and it works well.
> 
> I would really appreciate it if someone could point me to a document
> where I can find more information about this problem.

I brought that issue up on the list myself some time ago at a thread
starting here:

  http://lists.gnu.org/archive/html/automake/2004-05/msg00016.html

Currently automake is coded to explicitly require "lib" in the front
of the name.

Bob




Re: Automake and bison/flex sources

2005-02-07 Thread Bob Proulx
Oliver Boris Fischer wrote:
> my project consists of some bison and flex files.  It seems that 
> automake will distribute the generated C files.
> 
> Is this intended? How can I turn this off without abusing CLEAN_FILES 
> and co?

Yes, that is intentional.  The documentation for automake says:

  The intermediate files generated by `yacc' (or `lex') will be
  included in any distribution that is made.  That way the user doesn't
  need to have `yacc' or `lex'.

Note that if the source files are modified then the automake generated
makefile will attempt to regenerate those generated files.  But
normally in a distribution image the user would not be required to
have yacc or lex (bison or flex) installed because the generated files
already exist.  Only the C compiler would be required.
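
For reference, a minimal sketch of the usual setup (the program and
file names are invented):

  bin_PROGRAMS = calc
  calc_SOURCES = parse.y scan.l main.c
  calc_LDADD = $(LEXLIB)
  AM_YFLAGS = -d

Automake derives parse.c and scan.c from the yacc and lex sources and
ships them in the distribution, and AM_YFLAGS = -d asks yacc to
produce the header file as well.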

Bob




if 'missing makeinfo' then pkg.info is zero sized

2005-05-08 Thread Bob Proulx
It seems that if 'makeinfo' is missing from the system and the
timestamp of the .texi file is newer than that of the .info file, then
the .info file is removed and replaced with a zero size file.  I tested
this with automake 1.9.5.

I am trying to understand why this is happening.  I was
led down this trail by problems with a Debian package that included a
.diff.gz file that patched both the .texi and the .info files.  Of
course patch does not preserve timestamps.  If patch ran fast then
these files had the same timestamp.  But there is a race and they
might not have the same timestamp.

In at least one case patch left the timestamps different.  That caused
the automake generated texi-to-info rule to run which removed the
target .info file.  The 'missing makeinfo' command ran and finding
makeinfo missing it ran 'touch $file' creating a zero sized file.
This was then packaged up.  Everything appeared successful.  I knew
that 'missing makeinfo' had been called but I also knew that the patch
also patched up the .info file directly so I expected the target file
to be okay regardless.  It was only later when I tried to install the
package that install-info failed because of the zero sized .info file.
Then I needed to peel the layers away until I could find out why.

I can see that both the texi-to-info rule and the missing script are
working together to keep going in this case by creating a zero sized
target info file.  So this is clearly intentional behavior.  But why?
It caused me a lot of work to debug why the package would fail the
installation.  It would have been a lot easier if the build had simply
failed due to a missing makeinfo.  Then I would have known right up
front the problem and fixed it directly.

Bob




Re: .DELETE_ON_ERROR ?

2005-05-10 Thread Bob Proulx
Stepan Kasal wrote:
> Makefile.am contains:
> 
> foo.h: foo.x
>   $(GENERATOR) foo.x >foo.h
> 
> But the GENERATOR command failed and I have empty foo.h.

Yes, because the shell redirection creates the file instead of the
generator.

> It would be nice if make deleted foo.h automatically, but this is not the
> historical practice.  GNU make enables you to change its behaviour by
> mentioning a target:
> 
> .DELETE_ON_ERROR:
> 
> Could Automake possibly add this target to all generated Makefile.in's?

Is your goal to limit the amount of Makefile.am editing you need to
do?  Because it would seem that it would be easy for you to add that
target to the makefile yourself since you are already adding the
generator lines.

How much control of the generator do you have?  You could make it a
wrapper for the real generator.  Have it delete the target on error.

If you modify your Makefile.am you can use a stanza like this to
detect the error and to remove the file in this particular case.  Then
pass an error up to make on the way out.

  foo.h: foo.x
    $(GENERATOR) foo.x >foo.h || { rm -f foo.h ; exit 1 ;}

> I understand that the problem is that we want to write portable Makefiles.
> OTOH, this change doesn't prevent build from scratch with non-GNU makes.
> And most developers use GNU make, and this change can help them with their
> work.

The example above is portable to non-gnu make.
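
For comparison, if you are willing to require GNU make then you could
add the special target to your own Makefile.am rather than having
automake emit it:

  .DELETE_ON_ERROR:

  foo.h: foo.x
  	$(GENERATOR) foo.x >foo.h

GNU make then deletes foo.h whenever the recipe fails; other make
implementations simply treat .DELETE_ON_ERROR as an ordinary unused
target and ignore it.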

Bob




Re: back to directory dependencies

2005-06-28 Thread Bob Proulx
Baurzhan Ismagulov wrote:
> [This thread was started on Jun 7. I couldn't find the list archive --
> http://directory.fsf.org/GNU/automake.html doesn't mention any, and
> http://sources.redhat.com/ml/automake/2005-06/ says there were no
> messages sent this month. Which archive do you use?]

The message headers include this information:

  List-Id: Discussion list for automake <automake.gnu.org>
  List-Unsubscribe: <http://lists.gnu.org/mailman/listinfo/automake>,
    <mailto:automake-request@gnu.org?subject=unsubscribe>
  List-Archive: <http://mail.gnu.org/archive/html/automake/>
  List-Post: <mailto:automake@gnu.org>
  List-Help: <mailto:automake-request@gnu.org?subject=help>
  List-Subscribe: <http://lists.gnu.org/mailman/listinfo/automake>,
    <mailto:automake-request@gnu.org?subject=subscribe>

The list archives actually moved from what is listed but there is a
redirect in place to http://lists.gnu.org/archive/html/automake/ so
everything works.  There I found the original note.

  http://lists.gnu.org/archive/html/automake/2005-06/msg00028.html

Bob




Re: Atomic control of subdirectories with recursive automake

2005-07-25 Thread Bob Proulx
Brian wrote:
> I am in the planning stages of autoconfiscating a large project. The project 
> will have a top-level makefile.am <http://makefile.am> and then several 

Did you expect that to be a http web link?

> subdirectories which each generate an executable and have a
> makefile.am.

Every directory will have a Makefile.am file and a recursive make will
be performed.  Okay.  Sounds fine.  That is all normal.

> A specific feature I have been asked for is the ability to jump into any of 
> the given subdirectories and run only that makefile, compiling only that 
> program.

That is a normal feature of automake generated Makefiles.  Only that
directory will be made.  However if that directory depends upon
another directory such as a library then that other directory will
need to be made first.  A top level make will traverse all
directories.

But what you are talking about there is just edit, make, edit, make,
in a single directory and that is all standard and works fine.

It is also possible to optionally configure automake to create only a
top level Makefile.  But that does not sound like what you want.  It
is optional, so just don't do it.

> Additionally, at configure time an option such as --enable-debug 
> should be available, and if set, should create an additional debug version 
> of each subdirectory using the same source files (with optionally different 
> flags to custom programs) and leave the intermediate object files
> behind.

That can probably be done and someone will suggest a way to do it.
But normally if that is what you want then you would use separate
build directories.  While in your top level project directory:

  mkdir ../project-optimized
  cd ../project-optimized
  ../project/configure CFLAGS=-O

  mkdir ../project-debug
  cd ../project-debug
  ../project/configure CFLAGS=-g

  mkdir ../project-profile
  cd ../project-profile
  ../project/configure CFLAGS=-p

This will build three different copies of your project, each with the
build flags you gave it.

> In some sample tests where the only source file was main.c and
> bin_PROGRAMS = hello helloDebug, I was left with only one main.o. Of
> course I couldn't have two, but there was no "autorenaming" so to
> speak.

By building in separate directories as in the above there is no need
to rename because the programs are built in different directories.

> To recap:
> 
>- Optionally make subdirectories individually 

Normal for recursive make configurations.

>- Optionally keep separate intermediate files of simultaneous regular 
>and debug builds

Build in separate directories.

> - Pass flags from configure all the way to individual debug
>builds (I know how to enable the flags, just not how to make sure
>they make it all the way down)

When passed to configure those flags will be configured into all of
the Makefiles below.  To change flags you configure with different
flags.  This is the output of AC_CONFIG_FILES and AC_OUTPUT.
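
For concreteness, a minimal top level configure.ac for this kind of
layout might look like this (the project and directory names are
invented):

  AC_INIT([project], [1.0])
  AM_INIT_AUTOMAKE
  AC_PROG_CC
  AC_CONFIG_FILES([Makefile src/Makefile src/hello/Makefile])
  AC_OUTPUT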

Bob




Re: Atomic control of subdirectories with recursive automake

2005-07-27 Thread Bob Proulx
Brian wrote:
> I wanted to be a little more clear on this (since you brought up the idea of 
> subdirectories I have become keen on it)

Actually it was you who brought up the question of whether it works in
subdirectories.

> Suppose I wanted to make two versions of hello - one in /src/hello and one 
> in /src/hello/debug. Both programs would use src/hello/main.c as their 
> source file. 

Build both in a different directory to the side.  Keep the source
directory pristine with only source files there.  Build both the debug
and the optimized versions in different, separate directories.

> Is this possible while using only the makefile.am <http://makefile.am> in 
> /src/hello ?

Your use of "" is terribly confusing.  What do you
mean by that?  It is not a URL.  Please explain.

In addition to the good online documentation that accompanies the
autotools, there is also good documentation available both online
here below and as an honest-to-goodness paper copy for pleasant
reading.

  http://sources.redhat.com/autobook/

Really you are to the point that you simply need to try it.  Just jump
in and try some small examples.

Bob




Re: 'make' reruns configure ?!

2005-09-05 Thread Bob Proulx
Kendrick Smith wrote:
> Can anyone tell me why an Automake-generated Makefile
> would rerun the 'configure' script when 'make' is invoked,

This would mean that the timestamps on the files indicate that you
have modified a source file such as modifying a Makefile.am.  Because
the Makefile.am is newer than the Makefile the tools think it needs to
be rebuilt.

Downstream users unpacking a distribution tar.gz file should never
need to run the autotools because the tar unpacking will set the
timestamps properly.  The timestamps will indicate that the project is
up to date.  The make program will not be triggered to run automake.

One way for users to trip into this condition is to copy the unpacked
directory without copying the timestamps.

  cp -r project-1.0 myproj-0.0

Because the cp did not preserve timestamps the result depends upon how
fast your copy happened and things like that.  Better to put the -a
option there and preserve all attributes.

  cp -a project-1.0 myproj-0.0

Another way for users to trip into this condition is to check all
files, even the generated ones, into a version control system and then
do an update from it.  If the VCS does not preserve timestamps then
this can also indicate by the timestamps that the tools need to be
run to bring things up to date.

And of course there are other tools such as dpkg-source that do not
preserve timestamps and can cause inadvertent timestamps skews.

> and whether there's a (possibly heavy-handed) way to disable
> this behavior? 

There is a whole section in the automake FAQ about the "missing"
script and about the "maintainer-mode" option.  Please read the
documentation on it.  (You may need to change the version if you have
a different version of automake installed.)

  info automake-1.9 FAQ

If you wanted a heavy-handed way of making sure that make thinks your
timestamps are up to date then you can always make all files the same
timestamp.  This will make everything appear up to date.  You can then
make clean and make normally.

  find . -type f -print0 | xargs -r0 touch --reference .
  make clean
  make

Bob




Re: make depend problem with hello_SOURCES = ${SRCDIR}/hello.c

2005-09-12 Thread Bob Proulx
Harald Dunkel wrote:
> Question about make depend:
> 
> If I set
> 
>   SRCDIR = ../src
>   noinst_PROGRAMS = hello
>   hello_SOURCES = ${SRCDIR}/hello.c

Shouldn't you be using normal VPATH?  That is, you are setting
hello_SOURCES = ../src/hello.c.  But I don't think you want to do
that.

What are you trying to accomplish there?  Everything else I will say
here is speculative and contingent upon this.

> in my Makefile.am, then make complains
> 
> Makefile:242: ../src/.deps/hello.Po: No such file or directory
> make[1]: *** No rule to make target `../src/.deps/hello.Po'.  Stop.
> 
> No indication about what went wrong. Even worse, config.status
> has created a weird directory '$(SRCDIR)'.

This is related to the use of ${SRCDIR} in your sources list.

> Would it be possible to generate a better error message here?
> 
> Or is there some option to let make generate the *.Po files
> including their subdirectories?

Try this in the source directory's Makefile.am file:

  noinst_PROGRAMS = hello
  hello_SOURCES = hello.c

That should work.

Bob




Re: release and test targets

2005-12-12 Thread Bob Proulx
Ralf Corsepius wrote:
> Baurzhan Ismagulov wrote:
> > release:
> > $(MAKE) CFLAGS=-O2 prefix=/usr sysconfdir=/etc localstatedir=/var
>
> You are misinterpreting automake's tasks.  Packaging is none of
> automake's business.

The above technique really has nothing to do with automake and just
deals with make by itself.  It is obviously just a convenience
target.  But convenience targets are, well, convenient.

I will guess that what you are probably really objecting to, or at
least as well, is using the makefile to install things directly into
the system's live running directories of /usr and /etc.  I see that
and it makes me cringe.  Much better to use a package manager such as
dpkg or rpm for such tasks.  I would rather see that trigger a debuild
or an rpmbuild and create a real package instead.

> Your approach isn't much more than a short-cut to your personal and local
> practice and setup. It is wrong and therefore inapplicable almost
> anywhere else.

I agree it is inapplicable elsewhere.  But I disagree it is wrong.  If
you are the developer and want a short-cut target in your makefile and
it does not damage anything else then I believe that would be okay.
It adds convenience and removes nothing.

There are actually a lot of projects that add local project specific
targets to their makefile.  Few people outside the project know about
it because the public interface of 'tar xzf ...; ./configure; make;
make install' all work as expected and they never see the typing aides
internal to the development of the project.

Bob




Re: AM_MAINTAINER_MODE

2013-02-09 Thread Bob Proulx
Ineiev wrote:
> Russ Allbery wrote:
> >Another place where the default behavior frequently breaks is if one is
> >applying a patch to both the generated file and the source file, usually
> >because one explicitly *doesn't* want to re-run Automake (often because
> >there's some incompatibility with the latest version) and one has
> >carefully determined the right change to make to the generated file and is
> >also patching the source for documentation purposes.  If patch happens to
> >patch the generated file before the source file, make then tries to re-run
> >Automake and everything explodes.
> 
> If one is so careful, why not care to patch the source file before
> the "generated" file?

Sometimes we are using tools that do the patching for us and we have
no control over the order.  (For example quilt.  Others.)

But another question to ask is if that is the case why not simply
touch all of the files to the same time after the patching and before
the make?  That also forces everything to appear up to date too and
doesn't need AM_MAINTAINER_MODE to be added.
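
A sketch of that recursive touch, run after patching and before make:

  find . -type f -exec touch -r configure {} +

That stamps every file with the same timestamp as configure, so make
considers everything up to date.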

Bob



Re: AM_MAINTAINER_MODE

2013-02-09 Thread Bob Proulx
Russ Allbery wrote:
> Bob Proulx writes:
> > But another question to ask is if that is the case why not simply touch
> > all of the files to the same time after the patching and before the
> > make?  That also forces everything to appear up to date too and doesn't
> > need AM_MAINTAINER_MODE to be added.
> 
> Sure, that also works.  It just seems kind of silly to have to deceive
> make rather than just removing the make rules that one doesn't want.

hm... well...  It seems no more silly than patching in
AM_MAINTAINER_MODE to deceive the build system into not rebuilding
things that have changed.  Same thing but different.  (shrug) Except I
think putting a recursive touch in the package build is the much
simpler alternative.  Simpler is better when the result is the same.

(Although I don't do the recursive touch anywhere myself.  I
automatically update the autotools files in the package to the current
build system's version at build time.  That way support for new
architectures like the latest ARM come along automatically too without
needing to explicitly take action to do it.)

I know that you maintain some very large packages and some of those
undoubtedly have some seriously complex needs.  It is always hard to
talk about things in a general way when there are very specific
exception cases that cause obstructions.  There are always going to be
exceptions needed.  There isn't one size that fits everyone.  But sometimes
there is only one size available.  When that one available size
doesn't fit, that is bad.  But Stefano is explicitly saying that the
flexibility to do this will remain so we don't need to worry about the
bad case of having only one size that doesn't fit.

Bob



Re: subscription issue

2013-08-08 Thread Bob Proulx
Hello Rudra,

Rudra Banerjee wrote:
> I am a subscriber of this list, but no posts are delivered to my
> account.  I also got an automake bounce notice, saying 
> "Your membership in the mailing list Automake has been disabled due to
> excessive bounces ..."
> I tried to reply to the mail as directed on the same day, but got:
> "- Results:
> Invalid confirmation string.  Note that confirmation strings expire"
> 
> Is it possible to delete my account and recreate it? I tried to delete,
> and failed.

In the future if you have questions concerning the mailing list
administration it is better to send your correspondence to the mailing
list owner.  Add "-owner" to the address.  In this case the mailing
list owner address is automake-ow...@gnu.org and that will reach the
humans who administer the mailing lists.

If your account is being disabled due to too many bounces it means
that your mail server is bouncing messages from the mailing list.
Mailman has an algorithm that it uses to determine if the account is
alive or dead.  One bounce every now and again won't unsubscribe you.
But if there are more bounces than the threshold then it thinks that your
account is no longer alive and disables the address.

You can always unsubscribe and subscribe yourself at any time.  But if
email is bouncing then that will be the problem because you won't be
able to answer the confirmation messages.

Since this discussion won't be of interest to the members here I will
take the discussion offline and try to work toward a resolution.

Bob



Re: automatically showing test-suite.log on failure?

2018-09-21 Thread Bob Proulx
Bob Friesenhahn wrote:
> Karl Berry wrote:
> > However, this seems like it would be fairly commonly useful and easy
> > enough to do in the canonical test-driver script. So, any chance of
> > adding it as a standard feature? Any reasonable way of enabling it would
> > be fine, e.g., a flag that can be added to [AM_]LOG_DRIVER_FLAGS.
> 
> Take care since some test logs could be megabytes in size.

I want to add the point that users often try to be helpful and send
that multi-megabyte log file as an email attachment to bug reporting
mailing lists that sometimes have a lot of subscribers.  This is very
painful in network bandwidth usage.  And every downstream subscriber,
perhaps on a metered cell modem data plan, pays the cost of receiving
it whether they can help with it or not.

If there is some way, clever or not, to encourage the user to
compress that log file before attaching it, that would save a lot of
downstream pain.  When compressed, that log file is a small fraction
of the original size and rarely a problem.
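
For example (a minimal sketch):

  gzip -9 test-suite.log    # produces a much smaller test-suite.log.gz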

Just mentioning it to keep it in the brain cache when thinking about
improvements here.  :-)

Thanks!
Bob



Re: libdir on x86_64

2006-03-23 Thread Bob Proulx
Ralf Wildenhues wrote:
> * Guillaume Rousse wrote:
> > Am i supposed to manually set libdir according to build host to get
> > compliance with such constraint ?
> 
> Yes, you can specify --libdir at configure time.  Note for system
> installations you will usually have to set more options, for
> example --prefix=/usr --sysconfdir=/etc.  The idea of the GNU coding
> standards is that all packages allow to be configured in this way, so
> that you can adjust all packages to your system specific layout in the
> same way.

And note that not all systems use the biarch model of libdir becoming
lib64.  There are systems where 64-bit is native and the 32-bit flavor
is the guest lib32.  Also there is a multiarch model where the
directory name is lib/arch-os.  The configure script provides a
standard interface used by many different package implementations to
produce the desired filesystem layout.
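
For example, on a multiarch system the configure invocation might look
like this (a sketch; the exact directory name is system specific):

  ./configure --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu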

Bob




Re: how to get something from the configure script into one of the test programs?

2006-04-12 Thread Bob Proulx
Ed Hartnett wrote:
> To accommodate this on machines with disk quotas, I want to allow the
> user to specify to configure a directory in which large files can be
> created during testing. 

You might look into just having the user set TMPDIR (or through your
configure option) to the directory for holding large files.  Example:

  TMPDIR=/var/tmp mktemp -t myprog.XXXXXXXX
  /var/tmp/myprog.XXGWXbMk

By setting TMPDIR a lot of programs will already automatically use it
for temporary files.  It may allow you to avoid a lot of pervasive
changes through your code.

Bob




Re: Broken makefile given Autoconf version mismatch

2006-04-18 Thread Bob Proulx
Noah Misch wrote:
> Alexandre Duret-Lutz wrote:
> > I'm leery of assuming that Autoconf's version will always be at
> > this spot in the output of --version.  Sometimes people customize their
> > copy and tweak --version to reflect so:
> > ...
> > % gcc --version
> > gcc (GCC) 4.0.3 (Debian 4.0.3-1)
> 
> With respect to `--version' output, GNU Coding Standards state, `The
> first line is meant to be easy for a program to parse; the version
> number proper starts after the last space.'  Customizing `gcc
> --version' in this way certainly is common, but it does violate the
> Coding Standards.

I can see what is being attempted, to identify something as having
patches making it no longer strictly an upstream version.  That seems
admirable.  But perhaps the implementation could be improved.  Perhaps
instead of "gcc (GCC) 4.0.3 (Debian 4.0.3-1)" the output could join
the version with "gcc (GCC) 4.0.3.Debian-1" or some such?  Or at the
possibly it should be "gcc (GCC) 4.0.3 Debian 4.0.3-1" such that the
version after the last space is 4.0.3-1 and an intended representative
version number?

Bob




Re: dynamic dist?

2006-04-23 Thread Bob Proulx
Ralf Wildenhues wrote:
> Tyler MacDonald writes:
> > OK, so I might need something more portable than cpio... but the
> >"\s.*" part does serve a purpose; the MANIFEST file format allows for a
> >description of the file after whitespace. I guess I could do "[ \t]" or
> >something else instead of the \s.
> 
> Ah, I didn't think of that.  You cannot use \t either, but using a literal
> TAB inside the brackets works portably AFAIK. 

If you consider POSIX systems as being portable enough then using
the [:space:] character class should work pretty well.

  s,[[:space:]].*,,
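
For example, to strip the optional descriptions and keep just the file
names from the MANIFEST:

  sed -e 's,[[:space:]].*,,' MANIFEST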

Bob




Re: dynamic dist?

2006-04-23 Thread Bob Proulx
Hi Ralf,

Ralf Wildenhues wrote:
> Hi Bob,
> 
> * Bob Proulx wrote on Sun, Apr 23, 2006 at 05:35:23PM CEST:
> >
> > If you consider POSIX systems as being portable enough then using
> > the [:space:] character class should work pretty well.
> 
> Thanks to Mr. Solaris, no, I don't consider that portable enough.  ;-)

Hmm...  Well there is always:

  test -d /usr/XPG4/bin && PATH=/usr/XPG4/bin:$PATH

That might satisfy Mr. Solaris in this case.  It is a shame that he so
obstinately refuses to become more standard.  Sigh.

Bob




mailing list administrivia (was: aclocal 1.9.6 and drive letters)

2006-06-01 Thread Bob Proulx
Ralf Wildenhues wrote:
> * Andreas Büning wrote:
> > Some weeks ago I sent a mail to the automake-patches mailing list
> > but it never appeared there. Then I tried the bug-automake
> > mailing list. I checked the mailing list archives on lists.gnu.org
> > my posting never appeared.
> 
> That is weird, sounds like some spam filter gone too picky.
> Bob, Stepan?

Your message to the automake list went through fine.  And all of the
lists are configured virtually identically.  The spam filtering is
identical between them although the list configurations are
independent.

I looked through the spam trap archive and could not find any messages
posted from your address.  I looked at the automake-patches list and
you are subscribed there, which puts your address in the whitelist.  No
checking will be done and any mail with your from address will pass
through without filtering.  On the bug-automake list I see your
address in the whitelist which means messages from you were posted
there and accepted there previously as a non-subscriber and added to
the whitelist.

Did you possibly post from a different address previously?  Was it
different from the one you just now used to post to the list?

I am not really sure what to suggest might be the problem.  But it
would seem to be a problem between your site and the gnu.org site
where the messages never arrived at gnu.org.  If you could look in
your MTA logs to see what the disposition of that message was it would
probably point to a problem there.

Bob




Re: Filename extension guidelines and man page symlinks

2006-06-19 Thread Bob Proulx
Roy Hills wrote:
> I have two questions, which I hope that someone can help me with:
> 
> 1.  Are there any guidelines on file naming conventions for Perl scripts?
> 
> In particular, should it be called "foo.pl", or just "foo"?  Currently, I'm
> using the latter naming convention, and include them with the following
> line in Makefile.am:
> 
>dist_bin_SCRIPTS = get-oui.pl arp-fingerprint.pl

You provided conflicting information.  You said "the latter" which
means you are distributing them as "foo" but your Makefile.am says
that you are distributing them as "foo.pl".  This confuses me.

The implementation of an action should not be exposed to the users.
If 'make' were written in perl would the user need to type in
'make.pl'?  Then if the implementation were changed to ruby with
*identical* functionality would we force all users to change how they
call it and to type in 'make.rb'?  Then if the implementation were
changed to C would we then force the users to call it 'make.c'?

I believe the implementation should be hidden from the user.  It is an
abstract data type.  As a user I don't want to see it.  For one larger
example, the Debian Policy specifically says not to name files with
implementation specific endings like .sh or .pl.  (See section 10.4.)

Best to avoid encoding the implementation in the name.  Just call it
'foo'.

I use this to my advantage.  I edit foo.pl as the source file and have
make generate foo and replace @VERSION@ and other variables in the
script.  By having different source and distributed files it gives me
a nice way to expand variables easily.  I used the automake
documentation in section 8.1 Executable Scripts for a template.  You
can find the documentation here.

  info automake Scripts
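
For the archive, a minimal sketch of that pattern along the lines of
the manual's template (file names hypothetical):

  bin_SCRIPTS = foo
  CLEANFILES = $(bin_SCRIPTS)
  EXTRA_DIST = foo.pl
  do_subst = sed -e 's,[@]VERSION[@],$(VERSION),g'

  foo: foo.pl Makefile
          $(do_subst) < $(srcdir)/foo.pl > $@
          chmod +x $@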

Bob




Re: (Slightly OT) including svn revision in config.h

2006-07-25 Thread Bob Proulx
Jim Lynch wrote:
> Does anyone have a suggestion as to how I can automatically keep the svn 
> revision number in a config file somewhere?

  http://subversion.tigris.org/faq.html#version-value-in-source

Bob




Re: config.guess comments from our sysadmins

2006-08-11 Thread Bob Proulx
Ralf Wildenhues wrote:
> * Ed Hartnett wrote:
> > Ralf Wildenhues writes:
> > > It's safe to just replace the two files with newer versions; I think you
> > > should keep the two in sync though.

I thought I would note that both Debian and Red Hat packaging of
programs that use config.guess and config.sub do automatically copy
into the project tree the system copy of those files at the time the
package is built.

Presumably when a new system type is being ported to, the system copy
of those scripts will be updated to work on that system and then be
available for compilation.  Building the package will then get the
local system copy which works on that system.  This avoids the need
for all of the files to be correct in all of the upstreams, which is
pretty much impossible.  But using a system version on each system
works reasonably well.

> > OK. Should I replace them in every project and check them into the
> > codebase?
> 
> If you keep generated files in your version control system, then yes,
> I'd do that.  But only then.  I would generally recommend against this
> strategy (there are arguments for and against it, see 'info Automake
> CVS'; the fact that config.* are not generated does not alter the line
> of thought much here).

I personally don't keep them in version control.

> > Or can I update the automake copies of the files
> 
> Yes, that is a good idea...
> 
> > so that all my autotools projects will take advantage of them when I
> > do autoreconf -i?
> 
> ...not quite, that won't install files over already-present ones.  You
> need to use the --force argument for it to do so.

I suggest doing what the big distros are doing and manually copy over
the project files with the system copies.  Here are some examples.

On Debian the autotools-dev package README.Debian suggests this to
update these files in the package build.

  Just add:

        -test -r /usr/share/misc/config.sub && \
          cp -f /usr/share/misc/config.sub config.sub
        -test -r /usr/share/misc/config.guess && \
          cp -f /usr/share/misc/config.guess config.guess

  to the clean target of debian/rules.

On Red Hat the definition of the %configure macro in the
/usr/lib/rpm/redhat/macros file does this to automatically update
these files in the package build.

  for i in $(find . -name config.guess -o -name config.sub) ; do \
    [ -f /usr/lib/rpm/redhat/$(basename $i) ] && %{__rm} -f $i && \
      %{__cp} -fv /usr/lib/rpm/redhat/$(basename $i) $i ; \
  done ; \

The point of this is that automatically updating the config.guess and
config.sub scripts is considered a good thing.

Bob




Re: config.guess comments from our sysadmins

2006-08-12 Thread Bob Proulx
Bob Friesenhahn wrote:
> It may be considered a good thing to some, but it is not necessarily a 
> good thing.  Consider that the rest of the build package (e.g. 
> libtool) is expecting particular host identifications and that 
> sometimes the behavior of these scripts change.

Only very rarely will a particular platform change names.

> Also, consider that the versions available for a given platform may
> actually be older than the ones that come in the package.

Yes.  But the system copy will be working on that particular system so
it won't matter.  New config.guess scripts are almost always required
on new platforms.  As new platforms are supported the script is
updated.  But on any given system, once it is working it probably
never needs to change again for that system.  Within reasonable limits
of script improvements of course.

> You are breaking a matched/tested set of tools.

That was almost always the case when porting applications to hppa,
ia64, amd64, etc., since at the time I wanted to compile on those
platforms they were new and I almost always needed to update the
config.guess to support them.  New platforms are almost always that
way.  Mature platforms with better support just work out of the box
with any version.  So it does not matter in those cases.

I think someone reading your response will then be scared to update
the config.guess because you say it breaks a matched set.  But I think
that fear is unwarranted and in most cases the script *must* be
updated in order to work on new platforms.

Bob




Re: Forcing static link of libstdc++

2006-09-21 Thread Bob Proulx
Mike Melanson wrote:
> It's possible that I'm chasing after the wrong solution. This is a more 
> specific problem:
> 
> * I have a proprietary program that I am trying to build to run on a 
> wide variety of Linux/x86-based distributions.
> 
> * The build process links against libstdc++.so.6 on the build machine.
> 
> * The program fails to run on older systems that only have libstdc++.so.5.
> 
> * Thus, I have been seeking to statically link libstdc++.so.6 inside the 
> binary. Not sure if this is the right or optimal solution, though 
> previous versions of this same program -- using an ad-hoc Makefile 
> solution -- took this route.

What I suggest is to bundle up all of the shared libraries called by
the application and then including them in your installation bundle.
If your only issue is ancillary shared libraries then simply reference
them through LD_LIBRARY_PATH set in a invoking wrapper script.

But for me, and I think for you as well though you just have not hit
it yet, the problem is libc, which houses the dynamic linker that
links in the other libraries.  The older systems will have an older
libc and I have built against a newer one.  Therefore I need to bundle
the linked-against libc for those machines.  I am installing
centralized applications on a shared NFS filesystem.

Unfortunately libc is the one library that cannot be overridden by
LD_LIBRARY_PATH.  But it can be selected explicitly by invoking ld.so
directly using the --library-path option.  I do this routinely for the
same reasons as you list and it works quite well.

Given the following setup copy the newer glibc and lib parts to that
location.

  ./mylib/ld-linux.so.2
  ./mylib/libc.so.6
  ./mylibexec/myprog

Then this script in ./mybin/myprog works:

  #!/bin/bash
  MYAPPDIR=$(dirname $(dirname $0))
  exec -a $MYAPPDIR/mybin/myprog $MYAPPDIR/mylib/ld-linux.so.2 \
    --library-path $MYAPPDIR/mylib $MYAPPDIR/mylibexec/myprog "$@"

This allows you to run a specific glibc independent of the one
installed in /lib.  Include with libc all of the shared libraries that
are specific to the program and desired to be used instead of the
system installed versions.  This allows a program compiled on a newer
machine with newer libraries to run on older machines.  This allows a
program to be system distribution neutral.

I usually create a directory named after the program.  I usually don't
call them "my"bin but just have them called "bin", "lib", "libexec",
etc.  I used the "my"dirs above to emphasize that this is not a system
installed location.

  ./myproject/lib/ld-linux.so.2  # ld.so
  ./myproject/lib/libc.so.6  # glibc
  ./myproject/lib/libstdc++.so.6 # your specific example lib
  ./myproject/libexec/myprog # compiled binary
  ./myproject/bin/myprog # wrapper script to launch it

You should pick the right version with respect to tls, nptl, etc.

Hope that helps.  If something is not clear feel free to ask further
questions.

Bob

P.S. This does not seem to be a very well known technique.  In fact it
seems to be a best kept secret because I have seen this question asked
a few times and have been posting essentially this same information in
a few different places.

  http://gcc.gnu.org/ml/gcc-help/2006-07/msg00126.html
  http://svn.haxx.se/users/archive-2005-05/1727.shtml

But this is basically old news.  Here is a useful reference:

  http://www.novell.com/coolsolutions/feature/11250.html




Re: Forcing static link of libstdc++

2006-09-25 Thread Bob Proulx
Mike Melanson wrote:
> Sounds like a useful possible solution. However, what if the primary 
> functionality actually resides in a shared library itself?

As shown by ldd on the shared library?  Those will be loaded using
LD_LIBRARY_PATH so in that case you don't need the complicated wrapper
for libc using ld.so but just the simple one that only sets the
environment variable for the dynamic loader to follow.
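
For the archive, a minimal sketch of that simple wrapper, reusing the
hypothetical my* layout from my earlier message:

  #!/bin/sh
  MYAPPDIR=$(dirname $(dirname $0))
  # Prepend the bundled libraries for the dynamic loader to search.
  LD_LIBRARY_PATH=$MYAPPDIR/mylib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  export LD_LIBRARY_PATH
  exec $MYAPPDIR/mylibexec/myprog "$@"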

Unless the shared library is using rpath.  In which case it will use
the compiled in path first and only if that fails will it fall back to
using LD_LIBRARY_PATH and if that fails fall back to using whatever is
configured for the system in /etc/ld.so.conf.  (I do not like rpath
because it freezes the configuration into an inflexible state.)

> This is a proprietary plugin that another program is expected to
> load and call. Is there some sort of wrapper trick for that, or is
> that up to the communication between the app and the plugin?

Everything depends upon how the application is loading the shared
library.  If it is done by the linker then ld.so will load it and
LD_LIBRARY_PATH should work to find it.  If the application is
dynamically loading a specific file from the filesystem then you are
stuck with whatever the application allows you to adjust.

All of this is assuming that I did not get some of this wrong.

Bob




Re: [automake] Debian and its alternatives' system.

2006-11-02 Thread Bob Proulx
Benoit Perrot wrote:
> On debian, several version of the same package may be installed,
> and the default, prefered one is selected by providing a
> symbolic link pointing to it.

Yes.  Very nice.

> So, I was wondering if there was a way to select the automake
> path or exe to use, or if patching AM_INIT_AUTOMAKE could be
> the solution.

You can always set PATH to $HOME/bin and in there make a symlink or
wrapper script from 'automake' to 'automake-1.9' or whatever.  Because
it is in your PATH ahead of the /usr/bin/automake location it will be
found first and used.
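
For example (a minimal sketch, assuming automake-1.9 is the version
you want to pin):

  mkdir -p $HOME/bin
  ln -s /usr/bin/automake-1.9 $HOME/bin/automake
  ln -s /usr/bin/aclocal-1.9 $HOME/bin/aclocal
  export PATH=$HOME/bin:$PATH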

Bob




Re: c++ link order problems

2006-11-28 Thread Bob Proulx
Dan McMahill wrote:
> Ralf Wildenhues wrote:
> >Just curious: what's the reason for the ordering constraint?
> 
> When static objects use inheritance, the base class must be initialized 
> before anything can be derived from it.  At least that's what I've been 
> told.  On this particular project, I'm just the build system monkey and 
> not the c++ programmer ;)  In fact, I'm not a c++ programmer at all.  In 
> the c programs I've written the link order wasn't important.

IIRC C++ says that the initialization order of global static objects
is either implementation defined or undefined, I don't remember which,
and so basically one cannot count on the order.  This means that any
application that is depending upon this order is buggy.  A correct C++
program won't have a link order dependency.  This is probably the same
with Benoit Sigoure's issue with Sony's SDK too.

There is a C++ design pattern to guarantee initialization of global
objects and the iostreams use it these days.  In the old days before
the iostreams did this they could not be used in constructors or they
would core dump because of being uninitialized.  Sounds like you have
the same problem.  You should file a bug with the developers.  Have
them look to see how the iostreams do it these days and do something
similar.

Bob





Re: verbosity

2007-01-12 Thread Bob Proulx
Jason Kraftcheck wrote:
>  This makes it *very* easy to miss potential important compiler warnings
> and such in all the noise.

I have heard this infrequently from posters but I don't experience
this myself and here is why.  I think I will go out on a limb and say
that most (many?)  developers use automated tools to walk through
every warning and every error output from the compilers when building
their projects.  Because of this I find it unlikely that anyone doing
this would miss a warning or an error.  In fact because of this
warnings are an extra annoyance and will usually get fixed.  (Even
without -Werror.)

My personal tool of choice for this is emacs.  But before you balk at
that (because if not then you would not have been asking your
question) let me say that surely vim, kdevelop, or other IDEs also
have capabilities in this area.  Instead of trying to hide useful
output from the build process, let me suggest that you
investigate using an improved IDE to build and develop your project.
An IDE that walks through the warnings and errors removes much of the
drudgery.  I highly recommend this.

Barring this I would use gcc's -Werror option to make all warnings
into errors.  This way warnings will not be missed.  But I realize
that you said this was a legacy application.  Cleaning up a legacy
application to be completely warning free can be a challenging
process.  I am facing that prospect (again) myself right now.
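
For example (a minimal sketch, assuming GCC):

  ./configure CFLAGS="-Wall -Werror" CXXFLAGS="-Wall -Werror"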

Just my two cents...

Bob




Multiple lex scanners? How?

2007-01-15 Thread Bob Proulx
Multiple flex generated scanners are giving me trouble with duplicate
symbols.  I am using automake-1.10.

What is the best practice for organizing a program that includes
multiple lex generated scanners?

I am using the recommended practice of using defines to rename all of
the yacc/lex symbols as described in the automake manual
(e.g. "#define yylex c_lex", "#define yyerror c_error", etc.) for the
yacc generated parser.  The problem I encounter is that yyin is hard
to get redefined.  Flex outputs a definition for it before outputting
my define for it in the top of the scan.l file.

Basically what I have looks like this:

  myprog_SOURCES = parse.y scan.l
  BUILT_SOURCES = parse.c parse.h scan.c
  AM_YFLAGS = -d

That is all well and good but then yyin is a global symbol defined by
the flex generated scanner.  This appears before I can redefine it
inside the source.

I guess I can add an AM_CPPFLAGS=-Dyyin=myprogin and list out all of
the symbols to redefine there.  But that fails to work if I have two
flex generated scanners in the same directory.  I would rather find a
more general solution.

If I add an option to flex to rename the symbols like this then the
scanner flex generates renames the symbols for me.

  AM_LFLAGS = -l -Pmyprog

But in this case ylwrap fails because -P also renames the output file
to lex.myprog.c and ylwrap fails to find the produced file.  And it
has the same problem of failing if there are two flex generated
scanners in the same directory.

I am about ready to write a custom rule to sed the generated file.  I
am sure that will work but I must be missing something obvious since
this is a common enough problem that there must be a standard
solution that I am missing.

Thanks
Bob




Re: Multiple lex scanners? How?

2007-01-17 Thread Bob Proulx
Nicolas Joly wrote:
> Bob Proulx wrote:
> > What is the best practice for organizing a program that includes
> > multiple lex generated scanners?
> 
> I encountered the same problem ... and made the following constructs
> that did the trick (at least for me).
> 
> AUTOMAKE_OPTIONS = subdir-objects
> 
> AM_YFLAGS = -d -p `basename $* | sed 's,y$$,,'`
> AM_LFLAGS = -s -P`basename $* | sed 's,l$$,,'` -olex.yy.c

That is very clever.  I like it.  The -P option solves the renames
completely; flex also renames the output file, but the -o option
solves that problem.  Very nice!

In my case I might need to generate a mapping from filename to symbol
name (instead of your case using sed) because the legacy code I am
dealing with does not use the filenames for the symbol names.  In my
case the filenames are long and the symbols are short.  But I can deal
with that reasonably enough.

So for the mail archive my case might be:

  AM_LFLAGS = -s -P`basename $* | sed 's|ascscan|asc_|;s|gdsscan|gds_|'` \
    -olex.yy.c

I expect that mapping to be very filename and symbol name specific.

> Hope this helps,

Very much so.  I think there might be room for improvement in some way
to make this easier but I am not sure what to suggest.

Thanks!
Bob




Re: How do I write a configure.ac file to default sysconfdir to /etc?

2007-01-27 Thread Bob Proulx
Jim Lynch wrote:
> I'd really like a way to enable this for specific applications, not
> as a site default. ... ... ...  If not, I'll just have to go back to
> my cludgy way of adding my own rules to copy it to a hard coded
> /etc, (Ugh).

What I do is to keep a configure.sh script in the parent directory
with all of the options that I want specified.  Then when configuring
those special applications I run the script.  This makes it more
self-documenting what I am doing and I can remember it later.

Example "../configure.sh" script:

  #!/bin/sh -x
  PATH=/usr/local/build/coreutils/bin:$PATH
  ./configure --prefix=/usr/local/build/coreutils "$@"

Then in my build directory:

  sh ../configure.sh
  make
  make check
  make install

In your case you would want to set --sysconfdir=/etc.  Something like
this:

  #!/bin/sh -x
  ./configure --sysconfdir=/etc "$@"

Bob




Re: init.d script best practice

2007-02-10 Thread Bob Proulx
deckrider wrote:
> Is there a best practice example for using autoconf/automake to
> install system init scripts?  For instance, HP-UX looks for these in
> /sbin/init.d and /sbin/rc*.d and many others look to /etc/init.d and
> /etc/rc*.d.

I would recommend not using automake for this.  The format of the boot
time rc scripts are quite different from system to system.  Even on
one system across major versions changes happen and the boot time
scripts would need to track those changes.  Therefore it would be
difficult to write something portably in the spirit of automake.

However I do use 'make install' frequently to install into the
/usr/local tree for my own development purposes.  [I do this as a
non-root user being part of the 'staff' group.  This is very Debian
but I propagate that design to HP-UX and other places.]  Doing so is
convenient for development and if that is what you are intending then
this is quite reasonable.  But I would never pass this on to others to
install in that same way.  For others I would make a proper
installation package.

For your own development purposes you could extend automake and set up
install-exec-hook (or install-data-hook) to install your script to
/sbin/init.d/ at installation time.  Off the top of my head and
completely untested something like this:

  EXTRA_DIST = bootscript

  install-exec-hook:
          cp $(srcdir)/bootscript /sbin/init.d/
          cd /sbin/rc3.d && $(LN_S) ../init.d/bootscript S900bootscript
          cd /sbin/rc4.d && $(LN_S) ../init.d/bootscript S900bootscript
          cd /sbin/rc5.d && $(LN_S) ../init.d/bootscript S900bootscript

  uninstall-hook:
          rm -f /sbin/rc3.d/S900bootscript
          rm -f /sbin/rc4.d/S900bootscript
          rm -f /sbin/rc5.d/S900bootscript
          rm -f /sbin/init.d/bootscript

That would of course be an HP-UX specific configuration.  I would feel
uncomfortable putting that in the Makefile.am.  Therefore the way I
would make use of it would be to have a GNUmakefile that included the
full Makefile and also Makefile.maint (maintainer's makefile) and put
these rules in the Makefile.maint file.  Using GNU make it will use
GNUmakefile first which will include both of the others and everything
will work as expected.  A convenience is that in Makefile.maint you
can guarantee that you always have GNU make and therefore use extended
syntax.  As a developer I would be using GNU make and so this would
work fine.  I don't normally distribute the Makefile.maint file and
simply keep it for my own purposes such as the above but others do.
See the automake distribution and a few others for real life examples
of GNUmakefile and Makefile.maint files.

Bob




Wishlist: Clean target for generated C files?

2007-02-27 Thread Bob Proulx
I would really like a clean target that would remove generated source
files such as generated .c and .h files.  In the case of yacc and lex
I would like to distribute the generated files so as not to require
the use of yacc and lex to compile the distribution.  This rules out
DISTCLEANFILES, which would otherwise work.  But they are not true
source and so at times it is useful to delete them and rebuild them,
such as when asking the question: which version of flex works and
which fails?

It seems that I really would like another clean target that is more
than CLEANFILES but less than MAINTAINERCLEANFILES that is not called
when making a distribution.  MAINTAINERCLEANFILES removes the Makefile
and running autoreconf and configure on this particular project takes
a very long time.

Just wishing...

Bob




Re: Wishlist: Clean target for generated C files?

2007-03-01 Thread Bob Proulx
Perrog wrote:
> 2007/2/28, Bob Proulx <[EMAIL PROTECTED]>:
> >I would really like a clean target that would remove generated source
> >files such as generated .c and .h files.  In the case of yacc and lex
> >I would like to distribute the generated files so as not to require
> >the use of yacc and lex to compile the distribution.  This rules out
> >DISTCLEANFILES or that would work.  But they are not true source and
> >so at times it is useful to delete them and rebuild them.  Such as
> >when asking the question, which version of flex works and which fails?
> 
> Do you mean having a template file with extra rules fed to the automake
> command, containing a target like CLEANDERIVED concatenated to every
> Makefile.am file. Or do you mean adding a new primitive
> _DERIVEDSOURCES to auto-makefiles with a list of intermediate/derived
> files? Or both?

I was thinking of something like the MOSTLYCLEANFILES behavior which
supports 'make mostlyclean'.  An additional MORECLEANFILES variable
supporting a 'make moreclean' target that removes MOSTLYCLEANFILES
and CLEANFILES and additionally MORECLEANFILES would be perfect.

I mentioned yacc and lex but in my case there is also a project
specific code generator that produced .c and .h files from a .d
definition file.  I was modifying the generator.  Conceptually it is
similar to the yacc and lex paradigm of generated source files that
everyone knows though.

Putting the clean behavior on a scale they look like this following
with to the left being less clean and to the right being more clean.

  <-- less clean -- more clean -->
MOSTLYCLEANFILES
CLEANFILES
DISTCLEANFILES
MAINTAINERCLEANFILES

AFAICT the expected use model for mostlyclean would be for after the
build.  That would tidy up some by removing the *.o files but leave
behind *.a libraries and program binaries.  This is not as useful for
me personally but I can see where it would be useful for others.

  ./configure
  make
  make mostlyclean

What I am wishing for is something more like this.

  <-- less clean -- more clean -->
MOSTLYCLEANFILES
CLEANFILES
MORECLEANFILES
DISTCLEANFILES
MAINTAINERCLEANFILES

With a use model more like this somewhat stylized example.

  ./configure
  make
  make check
  ...modify code generator...
  make moreclean
  make
  make check
  ...modify code generator...
  make moreclean
  make
  make check

Certainly right now I can 'make maintainer-clean' and then bootstrap
the project again to get the same result.

  ./configure
  make
  make check
  ...modify code generator...
  make maintainer-clean
  autoreconf --install
  ./configure
  make
  make check
  ...modify code generator...
  make maintainer-clean
  autoreconf --install
  ./configure
  make
  make check
  
But then I must also autoreconf the project.  With large projects this
can be a very long amount of time.  That large time penalty is painful
when needing to be done repeatedly in the interactive development
cycle and best avoided.  And in that case I must also have all of the
autotools available on that platform to support it whereas with the
proposal only the distributed configure support is needed.

[Instead I used find and generated a list of .y and .l files and then
converted them into a list of potential .c and .h files and then
generated a script from this to remove those generated files creating
my own private moreclean functionality.  That worked but it was not as
nice as it could have been.]
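
For the archive, a minimal sketch of that private moreclean (untested
here, and assuming the generated .c and .h files sit next to their
sources):

  find . -name '*.y' -o -name '*.l' | sed 's/\.[yl]$//' |
    while read f; do rm -f "$f.c" "$f.h"; done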

Bob




Re: Wishlist: Clean target for generated C files?

2007-03-01 Thread Bob Proulx
Ralf Wildenhues wrote:
> * Bob Proulx wrote on Thu, Mar 01, 2007 at 05:37:56PM CET:
> > With a use model more like this somewhat stylized example.
> [...]
> >   ...modify code generator...
> >   make moreclean
> >   make
> 
> Looks to me like if your generated code had proper dependencies you
> would not need the 'moreclean' step.  Is that observation correct?  
> If no, what am I missing?  If yes, then let's see why you have not
> (or can not?) describe the dependencies properly within a Makefile.am.

Do you normally write "proper dependencies" so that when yacc or lex
is updated the targets using them become out of date?  I don't see
any such dependencies in automake generated Makefiles.
Is that an additional step that is recommended when using .y or .l
source files?  Should the manual be updated to say that?  (Obviously I
am thinking that it is not and simply going with the topic for debate
as written. :-) :-)

Here would be what I think would be a minimum Makefile.am for
discussion.  Should this be sufficient?

  bin_PROGRAMS = foo
  foo_SOURCES = foo.l
  BUILT_SOURCES = foo.c
  EXTRA_DIST = $(BUILT_SOURCES)

I was avoiding being too system specific by saying "modify code
generator" but perhaps I should have been very specific to avoid this
confusion.  And I had both the flex case and a project local script
generator too.  It seemed to be rather of a generic problem.

  sudo apt-get install -q -y flex-old
  ./configure
  make
  make check
  sudo apt-get install -q -y flex
  make
  make check

As far as I can tell an automake generated Makefile will not detect
that I have changed versions of flex.  Hence the desire for a clean
target that would clean generated source when desired.

But I don't think this is something that should be checked
automatically for system dependency changes.  I have previously worked
with projects using 'mkmf' which *did* write in those dependencies on
system things and overall it was more trouble than help.  I don't want
to suggest that.  Things would be worse in that case.

This is not a big deal.  It was annoying me and so I was just wishing
is all.

Bob

P.S. I actually don't like using yacc and lex too much these days
because they are always so messy with regards to this and many other
things.  But some projects have them and the one I am working with has
many of them.




Re: generated ChangeLog

2007-03-13 Thread Bob Proulx
Stepan Kasal wrote:
> Andreas Schwab wrote:
> > If you use AUTOMAKE_OPTIONS = foreign then automake should not complain.
> 
> yes, this is good note, thanks.  I wanted to keep the gnu strictness,
> since I hoped Automake would also check other things for me.
> 
> But perhaps this is the best solution, after all...

If the choice is a generated ChangeLog or using 'foreign' then a third
choice would be to use ChangeLog as a small pointer describing how to
get the change log information from version control.  I do that on
some work projects that have not previously kept a ChangeLog.  If a
ChangeLog file exists then the check is satisfied.
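
For example, a ChangeLog containing just something like this (a
sketch) satisfies the check:

  The change history for this project is maintained in the version
  control system.  Run "git log" (or your VCS equivalent) in a
  checkout to see it.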

Bob




Re: Shouldn't the definition of maintainer-clean be changed?

2007-03-17 Thread Bob Proulx
Hello Stepan,

Apologies to all for continuing the large crossposting.  I am not
subscribed to those lists.

Stepan Kasal wrote:
> For details, see my post here:
> http://lists.gnu.org/archive/html/autoconf/2007-03/msg00043.html
> 
> But people tend to guess that this target must be the opposite to
> bootstrapping from CVS.

First off let me say that I was perfectly aware of the standards for
make maintainer-clean when I posted my response to that message.
There is no standard target to perform the desired operation.  That
poster had a very particular set of needs.  My suggestion there did
not in any way reflect a "standard" use of automake or use of the gnu
standards.  It was a very targeted (ab)use of the tool.  I knew that.
I was not proposing a modification of the standards.

The user was wanting to do something non-standard, but not
unreasonable, and expanding upon using the MAINTAINERCLEANFILES seemed
like the easiest way to accomplish that.  I don't think the user had
any intention of distributing the code as a GNU project.

> Moreover, I noticed that AutoGen tries to use maintainer-clean in
> this twisted way.

I have not looked at AutoGen nor how it is using these tools.

> Another example: when I submitted a patch that removed Makefile.in
> from MAINTAINERCLEANFILES to HAL, I got told that using
> `maintainer-clean' to delete everything generated by autotools has
> become a ``common practice'':
> http://lists.freedesktop.org/archives/hal/2007-March/007667.html

A quote from Jon McCann:
> > I suspect the reason that so many people use this practice is that
> > it solves a very common problem for maintainers, namely: how to
> > clean all files generated by autogen.sh.

I found that a quite reasonable statement.  The practical concerns of
needing a clean target win over the philosophical desire to
standardize but not having a useful standardized operation to use.  I
agree that as a GNU project it should follow the standardized use of
the defined targets.  It is a shame to remove functionality not
otherwise provided in order to do this.  In defense of the practice,
when using the maintainer targets you are assuming the role of a
maintainer, and more technical capability is assumed and often
required in that case.  I doubt this issue is actually causing a real
problem.

> I'm afraid that this might become a big mess.  I think that the GNU
> standardization crew might help here.

I respectfully suggest that a standards committee driving design may
not be the best way to do this.  It would be better to implement the
new behavior first and then after it is proven useful and effective
then drive changing the standards to use it.

> There is a strong need for an un-bootstrap.  Which command should
> fill the gap?

It would be good to have some improved functionality in this area.
See also my posting asking for a clean target for generated source
files.  [All it needs is someone to actually do the work. :-)]

  http://lists.gnu.org/archive/html/automake/2007-03/msg1.html

> If `make maintainer-clean', then the GNU Standards should be changed
> to reflect this.  The obvious disadvantage is that if the
> bootstrap&&configure does not finish, maintainer-clean is not usable.

If configure does not finish then no Makefile based target is usable.
Which may have been your point.  But I think it is safe to assume a
working system, and in a working system configure will finish and
Makefile targets will be available.

Bob




Re: Placing libraries in single central directory

2007-04-05 Thread Bob Proulx
antoon wrote:
> By default this library is placed in the same directory as the C-files.

After the result of 'make' yes.

> But ...  how to copy this library afterwards to a general *lib* directory,
> because  during the overall linkage step of my code, I want all my libraries
> in one single directory.  You can do something like this, by make install
> but that's not what I mean.

I don't believe automake is designed to do that as it is right now.
However you can use symlinks to simulate this.

In the directory where you want the .a files to appear, create a
symlink pointing to each library in the directory where it is built.
In that way the libraries will appear to be in the central directory
where you want them.
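
For example (a minimal sketch with hypothetical names):

  cd lib && ln -s ../src/libfoo/libfoo.a .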

Bob




Re: Circular dependecy linker trouble

2007-06-07 Thread Bob Proulx
Søren Boll Overgaard wrote:
> Hello,
> 
> I've recently migrated a rather large body of code from a proprietary 
> development environment, to an automake based one.
> 
> I've run into trouble during linking though. The code is laid out like 
> this:
> 
> src/ 
> Contains main.cpp which holds the main method
> 
> src/framework/ 
> Contains framework code. All files in here are packed into an archive 
> libframework.a.
> 
> src/services/
> Contains service related code. All files in here are packed into an 
> archive libservices.a
> 
> No other programs need these archives, so I guess I could just as well 
> link with the object files directly, if I could determine a clean way of 
> doing that.

I would keep the libraries.  It encourages modularity.

> Framework code depends on services code and vice versa. 
> Linking fails to resolve dependencies.

Even if modularity is observed.  :-)

> The linker command line essentially looks like this:
> 
> g++ -g -O2 ../src/framework/libframework.a \
>   ../src/services/liblmsservices.a -o program main.o -lpthread

That is the new linker line, right?  What was the previous linker line
that you are replacing?  I expect that the libraries were listed
multiple times previously or it would not have worked with the prior
build system.

> Can anyone explain to me what I am doing wrong?

Effectively libraries are searched left to right and .o files are
pulled out of them as they are needed.  When circular dependencies
between .a files exist then you will need to list the .a files
multiple times on the link line so that they are processed more than
once.

Of course breaking the circular dependencies would be best but I
understand that this is legacy code and that you are moving forward
incrementally, a common practice.

I suggest using LDADD or PROG_LDADD (where PROG is the name of the
program as it appears in the _PROGRAMS variable, usually lower case)
and adding the libraries again there.  (Although if others on the list
suggest better things please go with their suggestions.)

Because you are already getting these libraries on the command line by
some means I think this following should work.

  LDADD = ../src/framework/libframework.a ../src/services/liblmsservices.a

If you were getting those libs onto the command line by that method
already then you need to double them so that they appear twice.

  LDADD = ../src/framework/libframework.a \
    ../src/services/liblmsservices.a \
    ../src/framework/libframework.a \
    ../src/services/liblmsservices.a

Bob




Re: Automake violations of the gnu coding conventions

2007-06-18 Thread Bob Proulx
K. Richard Pixley wrote:
> I notice that automake is currently generating Makefiles that violate 
> the gnu coding conventions.

Hmm...  I don't think that automake violates the standards.  In the
normal case it is not required to have automake installed.  Someone
who is simply building from the generated Makefiles never needs to
have automake installed.  Only a developer who is modifying the
automake source files would need automake installed.

Obviously from the questions you are asking you are experiencing a
specific problem.  Could you share some details?  I would rather see
the problems you are experiencing addressed rather than see the
generated Makefiles crippled.  There is always AM_MAINTAINER_MODE but
I think the cure is many times worse than the disease.

Meanwhile, let me address some of your points.

> Specifically, it's generating rules for rebuilding "Makefile" from
> "Makefile.in" and "Makefile.in" from "Makefile.am" which requires
> automake.

Rebuilding the generated Makefile.in file is only needed if the source
file is modified.  I don't see a good alternative in that case.  If
the automake source file is modified then presumably the person doing
the modification wants the change and then automake would be required.
Anyone doing this would fall into the developer role and there is an
expectation that developers must have the appropriate tools available.
If you are not modifying the automake source files then automake is
not required.

> Also, in the past, there were coding standards that prohibited Makefiles 
> from writing into $(srcdir).  The problem here is that source code 
> directories might be read-only as comes from a cdrom, or from a shared 
> but not owned source directory, (perhaps on a network server).

Please use VPATH builds for read-only source directories.  Example:

  mkdir /tmp/project
  cd /tmp/project
  /media/cdrom/project/configure

That is an example build from a read-only source directory.

Note that 'make distcheck' simulates a read-only source tree and is
one check that projects can build from read-only source.

> Note that these problems are particularly troublesome when one uses a 
> file transfer method for copying or moving source trees that doesn't 
> necessarily retain last modification time stamps, like source code 
> control systems, "cp", etc.  In these cases, the source directory 
> becomes unbuildable.

When file timestamps are munged then the simple action of touching all
files to the same timestamp is a good workaround.

  find . -type f -print0 | xargs -r0 touch -r configure.ac

Normally I would avoid munging timestamps of files.  However in the
case that they are already munged then I see no additional problem
with getting them all into a consistent state.

Again, I would appreciate it if you would describe the problem that
you are experiencing that has led you here with a complaint about
automake.

Bob




Re: Automake violations of the gnu coding conventions

2007-06-18 Thread Bob Proulx
Are we talking about one of your own projects?  Or are we talking
about other projects that you are trying to build?

K. Richard Pixley wrote:
> Bob Proulx wrote:
> > Someone who is simply building from the generated Makefiles never
> > needs to have automake installed.  Only a developer who is
> > modifying the automake source files would need automake installed.
>
> But that's my point.  With the defaults as they are now there are many 
> "normal" cases where automake is required.

But the cases that you have described so far are not normal cases.
They all entail doing something that can be reasonably avoided.

> > Obviously from the questions you are asking you are experiencing a
> > specific problem.  Could you share some details?
>
> I have already done so.  The actual use case is somewhat more involved 
> than is necessary to explain the problem.

Reading your mail carefully I note you say that 'cp' does not retain
file timestamps.  Instead of using 'cp -r' use 'cp -a' to preserve the
timestamps.  An easy problem to avoid.  Or "correct" them later by
updating all of the timestamps.  Either works.

If someone is trying to build from source control then they have
assumed the role of a developer.  Developers are expected to have the
required development tools available.  Almost certainly in the case of
checking out pristine source from version control many developer tools
will be needed for most projects.  This goes beyond automake and
includes gettext, flex, bison, gperf, texinfo, etc.

Bob




Re: need help in "deep" project ..

2007-07-03 Thread Bob Proulx
Ralf Wildenhues wrote:
> * Roberto Alejandro Espí Muñoz wrote:
> > AC_INIT([/src/main.cpp])
>
> Also note that there is a new form of AC_INIT/AM_INIT_AUTOMAKE,
> used and explained in the manual:
> 

To give an example, try using this instead of what you have now.

  AC_INIT([hmi], [0.3])
  AM_INIT_AUTOMAKE([-Wall])

Bob




Re: ${} and $()

2007-08-19 Thread Bob Proulx
Jan Engelhardt wrote:
> is there any real difference between $(var) and ${var}, and is the 
> latter as much POSIX as the first?

Is there any reason not to use $(var) and simply play it safe since
that is the traditional Unix make behavior?  Then all worries about
whether the builder's make will handle ${var} the same as $(var) are
removed.  If it were me I would play it safe.

Bob




Re: including files in the distribution tarball

2007-09-10 Thread Bob Proulx
James Willenbring wrote:
> Some of our developers find it especially painful to list individual files.
> Is there any way that we can add all files with a certain suffix to the list
> of files that are included in our distribution by default, or even include
> files that match some pattern?

The standard answer to your question is given by this documentation:

  http://sources.redhat.com/automake/automake.html#wildcards

The developers (me too) feel that it is better to avoid including
files dynamically.  Personally I have worked with 'mkmf' and have
experienced problems with people dynamically including files that were
not desired.  That experience puts me on the side that likes the files
to be explicitly listed.

> The specific issue we are looking at right now is how we can add a
> large number of documentation files to the tarball without listing
> each file.

You could script the editing of the Makefile.am file so that it is
populated with the list of files that you want.  A command such as
'find' can generate the file list very easily.

  find . -name '*.txt' -printf '  %P \\\n' 
  find docdir -name '*.txt' -printf '  %p \\\n' 

Then simply include that file list into the Makefile.am file.
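
For example, a minimal sketch that regenerates an includable fragment
(file and directory names hypothetical, GNU find assumed):

  {
    printf 'dist_doc_DATA ='
    find doc -name '*.txt' -printf ' \\\n  %p'
    echo
  } > doc-list.am

Then put "include doc-list.am" in the Makefile.am and rerun the
snippet (and automake) whenever the documentation files change.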

Bob




Re: strange choice of compiler on HP-UX

2007-09-26 Thread Bob Proulx
Andreas Schwab wrote:
> Joao Miguel Ferreira writes:
> > Question: How do I tell the tools to use only aCC for both types of
> > files, when compiling on an HPUX (we also build on Linux/gcc and
> > Solaris/gcc) ?

If the optional native HP ANSI C compiler is installed and 'cc' is a
symlink to it then compiling C files with 'cc' and C++ files with
'aCC' should be fine, right?  If not then it would be the code that is
in error for not making the proper extern "C" declarations.

As I read your issue what you are really asking is to force C code to
be compiled as C++ and name mangled in the same way avoiding the need
for extern "C" declarations.

> ./configure CC=foo CXX=bar

*I* would probably use the following and code in the appropriate
 extern C declarations.

  ./configure CC=cc CFLAGS="-Ae -g" CXX=aCC CXXFLAGS="-g"

But if you simply want to call C files C++ and force using the aCC C++
compiler for C files too then this should do it.

  ./configure CC=aCC CXX=aCC

> > PS: the cc is a link in /usr/bin that point to /opt/ansic/bin/cc !!! I
> > am not root of this system :-(
> 
> You don't need to be root to change PATH.

On HP-UX with the optional HP ANSI C compiler installed, 'cc' is a
symlink in /usr/bin, and /bin is itself a symlink to /usr/bin,
resulting in a single large bin directory with everything in one
place, so it would be difficult to change this by adjusting PATH.

Bob




Re: Generating 'cat' pages on make install

2007-11-03 Thread Bob Proulx
Jason Curl wrote:
> Continuing with my efforts of making a library designed for Linux a bit 
> usable for colleagues on Windows I'd like to figure out how to install 
> "cat" pages, i.e. conversions of "man" pages.

Hmm...  Would it make more sense to set up 'man' on ms-windows for
your colleagues such that it would generate the man pages correctly?
I would think that would be easier.

The problem is that the location and formatting of the man-cat pages
is different everywhere.  Different 'man' systems do different things
with them because 'man' is not standardized on every system.  The
man-cat pages are a cache for the 'man' program and different 'man'
programs have made different choices.

Additionally the terminal column width is a terminal specific
difference, so even on the same system with the same user there might
be two different formattings.  This can be mitigated by
forcing/assuming a standard 80-character wide terminal though.

> I can write a script that will do the conversion for me, but I'd ideally 
> like to have a rule that will convert the 'man' page for me to 'cat' in 
> the correct directory when installing.

Unfortunately there is no one "correct" directory.

Personally I would look into teaching them to use man on windows to
view the man pages.  That seems to me like it would be the easier
approach.  And then it would be all set up for other Unix-derived
commands in the future.

Even better for GNU-derived software would be to set up the info pages
so that the info documentation would be available.  For the GNU
project, info is usually the preferred documentation format.

Bob




failure in "colorful tests"

2007-11-12 Thread Bob Proulx
This following automake 'make check' finishes successfully.

  env TERM=ansi make -C tests check TESTS=color.test

However this next one has a failure.

  env TERM=dumb make -C tests check TESTS=color.test

And this one is quite colorful! :-)

  env TERM=dumb VERBOSE=yes make -C tests check TESTS=color.test

Bob




Re: What to check into repository?

2007-12-03 Thread Bob Proulx
Hongliang Wang wrote:
> My company decides to make part of our software source code
> open-sourced.  For this part, we will use automake & autoconf tools
> to generate the installation package (.tar.gz).

That sounds wonderful!

> However, the current problem is that we cannot decide what to check
> into our own software repository. The .c and .h files will
> definitely be needed to check in, but what about the Makefile.am
> configure.ac, and even the executable files like missing, compile,
> autogen.sh, aclocal.m4, autom4te.cache/ ?

If you wrote the file by hand then check it in.  If the file was
generated by a tool then don't.  This means that configure.ac and
Makefile.am files should definitely get checked in since they are
source files.  But generated files such as autom4te.cache should
definitely not get checked in.

There is debate about some files such as the ./configure script.  I
strongly do not like it checked in because it is generated and a large
number of spurious differences in version control ensues.  But a
counter argument is when it requires a special bootstrap to generate.
In that case having a seed version in version control would enable
someone to use it to bootstrap things along.

In any case the process that you are looking for is to be able to do a
pristine checkout from version control and then to rebuild
everything.  You did not say what version control system so I will
simply assume one without loss of generality.  The flow would go like
this:

  git clone git://git.example.com/project
  cd project
  autoreconf --install
  ./configure
  make
  make distcheck

Bob




Re: Automake (alpha) release request

2007-12-16 Thread Bob Proulx
NightStrike wrote:
> When you do make a release, where will be the list of new features located?

The NEWS file is the standard location to list new features.

The NEWS file as currently in version control can be seen here:

  http://git.sv.gnu.org/gitweb/?p=automake.git;a=blob;f=NEWS;hb=HEAD

Bob




Re: Automake (alpha) release request

2007-12-16 Thread Bob Proulx
Sebastian Pipping wrote:
> Thanks for your replies! As I understood the Automake
> release is delayed because its licensing info has not
> been updated to GPLv3 yet?

Actually the reverse.  Because the licensing has already been updated
it is now delayed.  Automake installs auxiliary files into your
project.  They have previously had this license wording.

# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.

The FSF wants to handle exceptions very carefully and deliberately.
Therefore the release of the autotools projects have been delayed
while the licensing exception to allow this for GPLv3 has been getting
worked out.

> If that is the case I wonder why new code depends
> on this. Can't you just push the licensing thing
> to a later release if the responsible people don't
> have time for this?

That has been considered but we would like to make the release under
GPLv3.  The code tree has already moved to GPLv3.

> To release a project using LZMA in its Automake files
> I will need a version number to put as requirement
> and without a release I am not able to do so.
> 
> Please try to find a way for a new Automake release
> before 2008. Let me know if I can help with anything.

There are a number of projects that are also waiting for this
licensing issue to be resolved.  Everyone wants it to happen.  But the
automake team's hands are tied while the FSF works this licensing
issue out.  I can only recommend patience.

Bob




Re: Building automake1.9 with autoconf2.61

2008-01-24 Thread Bob Proulx
Kamaljit Singh wrote:
> I was hoping to create a src tree which would build automake1.9 and

automake-1.9 has been replaced with automake-1.10.1.  You should use
the later version.

  ftp://ftp.gnu.org/gnu/automake/automake-1.10.1.tar.gz

> autoconf2.61 and install them in a central place. Writing a
> configure.ac for this is quite hard if not impossible. The automake
> expects and depends on that the directory where its untarred should
> be called automake-1.9, which makes writing the Makefile.am
> dependent on the version and would some unrequired maintenance of
> configure.ac/Makefile.am.

You should not need to write your own configure.ac.  Use the package's
distributed ./configure scripts.  That is probably the root of your
problems.

> Moreover automake1.9 is dependent on autoconf2.58+ to be present in
> the path that I cant have them in the same sandbox at all !!! I need
> to install autoconf first and then try configuring automake. That
> doesnt seem right.
> 
> Any suggestions, on whether/how this can be managed ?

First install the latest m4 because m4 is required for building the
autotools.

  ftp://ftp.gnu.org/gnu/m4/m4-1.4.9.tar.gz

Then after installing m4 install both autoconf and automake using
their included ./configure files.  This "works for me" and I think
should work for you too.
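
Putting that together, a sketch of the whole sequence (the version
numbers are simply the ones current today; substitute as needed):

  wget ftp://ftp.gnu.org/gnu/m4/m4-1.4.9.tar.gz
  tar xzf m4-1.4.9.tar.gz
  cd m4-1.4.9 && ./configure && make && make install && cd ..
  # then repeat the same configure/make/make install steps for the
  # autoconf tarball and finally for the automake tarball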

Bob




Re: make help?

2008-02-03 Thread Bob Proulx
NightStrike wrote:
> I meant more along the lines of, for instance in the gcc project,
> there is 'make all-gcc", "make all-gmp", "make all-mpfr", etc.

Those are all custom local targets.  The problem is the same as if I
were Bilbo asking, "What do I have in my pocket?"  It could be
anything and not truly a fair question.

> I admit that the idea wasn't well thought out; I just know that
> anytime I try using a new project where I need to build specific
> parts instead of just "make all install",

If you only need to install certain bits then I think the best answer
is to make everything but not install everything.  If you stop before
the install step then you can pick out just the parts that you want.

I am a strong advocate of packaging things that are installed on a
system.  Therefore I usually do the make install to a DESTDIR location
which is not the live system.  Then I can poke at things and move
things around if needed.  For example coreutils installs everything
into $(bindir) but the FHS and legacy history put specific commands in
/bin and others into /usr/bin.  This is done in the packaging script
and then when the package is installed everything ends up where it is
supposed to be.
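
As a sketch, the staged install looks like this (the staging
directory is arbitrary):

  make install DESTDIR=/tmp/stage
  # files land under /tmp/stage$(prefix), e.g. /tmp/stage/usr/local,
  # and can be rearranged there before the package is built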

> I find myself drudging painfully through a bunch of Makefiles to
> find what I need.  It'd be so much easier to see something like:
> 
> $ make help
> The following are valid targets:
> all
> install
> lib64/lib.a
> custom-target-that-makes-stuff-just-work

As I recall from other postings, you are porting software.  Sometimes
you just have to slog through it.  Sometimes upstream authors are
willing to help out and make things easier if they are made aware that
some of the things they are doing make it hard to port.  But porting
to a non-free system gets less sympathy.  Many of them are seriously
broken and it is very frustrating to try to fix them.

Bob




Re: proper autotools ordering?

2008-02-25 Thread Bob Proulx
Hi Karl!

Karl Berry wrote:
> I've been looking through the manuals and code, but not finding a
> definitive answer: is there a canonical/recommended ordering of running
> the autotools, including automake?

I really like the encapsulation offered by 'autoreconf'.
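
A sketch of what that encapsulation means in practice:

  autoreconf --install
  # runs aclocal, autoconf, autoheader, automake (and libtoolize
  # when needed) in the proper order and copies in any missing
  # auxiliary files such as install-sh and depcomp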

Bob




Re: -lm -lz

2008-03-06 Thread Bob Proulx
Stefan Bienert wrote:
> after an hour of searching, how am I supposed to invoke "-lm -lz" into 
> my compilation? The documentation states something about using "LIBS" as 
> a variable, autoconf also speaks of "LIBS" but nobody tells one what to 
> do with that variable.

LIBS is inherited from autoconf and is documented there.  You can get
a quick usage by invoking './configure --help' to get a help listing
from it.

  ./configure --help | grep LIBS
  LIBS        libraries to pass to the linker, e.g. -l<library>

It would be used by autoconf's generated configure script this way:

  ./configure LIBS="-Something_user_special"

But for automake you should look in the section describing how to link
programs.  You would want to put your flags there.

In the automake manual look for the section "Linking the program".

  8.1.2 Linking the program
  -------------------------

  If you need to link against libraries that are not found by
  `configure', you can use `LDADD' to do so.  This variable is used to
  specify additional objects or libraries to link with; it is
  inappropriate for specifying specific linker flags, you should use
  `AM_LDFLAGS' for this purpose.  

 Sometimes, multiple programs are built in one directory but do not
  share the same link-time requirements.  In this case, you can use the
  `PROG_LDADD' variable (where PROG is the name of the program as it
  appears in some `_PROGRAMS' variable, and usually written in lowercase)
  to override the global `LDADD'.  If this variable exists for a given
  program, then that program is not linked using `LDADD'.  
  ...
 We recommend that you avoid using `-l' options in `LDADD' or
  `PROG_LDADD' when referring to libraries built by your package.
  Instead, write the file name of the library explicitly as in the above
  `cpio' example.  Use `-l' only to list third-party libraries.  If you
  follow this rule, the default value of `PROG_DEPENDENCIES' will list
  all your local libraries and omit the other ones.

Since you are referring to third party libraries (libraries not built
by you in your build) using LDADD or PROG_LDADD is appropriate.

Something like this in your Makefile.am file:

  bin_PROGRAMS = foo
  foo_SOURCES = foo.c
  LDADD = -lm -lz

Or for the PROG_LDADD program specific version:

  bin_PROGRAMS = foo
  foo_SOURCES = foo.c
  foo_LDADD = -lm -lz

Bob




Re: Report to stdout like Linux kernel compilation does

2008-04-11 Thread Bob Proulx
John Calcote wrote:
> I love this format because warnings and errors are obvious, and yet
> you get enough output per file to tell you that something's going
> on.

To give you a different perspective, I *hate* that format because it
hides problems and *makes debugging harder*.  I want to see exactly
the command that was executed.  I want to see the entire command.  I
don't want to see an abbreviation of the command.

> The real benefit of this output format is that WARNINGS are obvious.

Again, from my perspective the disadvantage of this is that it *hides*
the commands that produced the errors and warnings.  Hiding this
information makes the build harder to debug.

> Often, in standard GNU/Unix/Linux build processes, warnings just zip
> right by without much notice.

Using the eye to scan through build output to find problems isn't a
reliable method.  It is too easy to miss something when tired or
hurried.  If you are counting on using your eye to find these then
statistically I believe you will have missed some error or warning at
some time in the past.  (I know this from human spam filtering which
has a non-zero error rate.)  It is much better to use a tool to scan
through build output.  Tools are much more reliable than humans for
this type of tedious and repetitive task.

For me I use my editor for this.  I tell my editor to open the file
and position my cursor on the next error or warning.  Since the editor
is walking each warning and error I can't miss any.  Most programming
editors do this these days.  However even without editor assistance
these messages can be extracted using 'grep' or some other process.
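
Even a minimal pipeline works; a sketch (the log file name is
arbitrary):

  make 2>&1 | tee build.log
  grep -nE 'warning|error' build.log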

> My question to the list was: How can I get Autotools builds to be
> quiet, so I can see the warnings? The response that I got was that I
> should just redirect stdout to /dev/null on the make command line.

That is effectively what you would be getting if you threw away the
information of the command that produced the problem.  If you want to
throw it away then okay but please leave it there for the rest of us
who need it.

> For the next couple of years, I very was frustrated with this
> response. I thought it was a cop-out for just not wanting to provide
> the functionality.

It isn't a cop-out.  Instead it is that your requested behavior
*actively hurts* the rest of us who need that information.  If things
were reversed then I would be writing the list begging to please
provide the full build information.  It is easy to throw useful
information away.  After having been thrown away it can't be gotten
back.  It is not symmetrical.

> Then I realized that it was really the Unix way.
> You want to see everything so that you know what's going on.

Yes!

> When you want to clean up the warnings (usually something done near
> the end of a development cycle), you simply build with stdout
> redirected to /dev/null when you run make a few times, and you'll
> see the warnings appear, because they're output to STDERR, not
> STDOUT.

No!  I never build with output redirected to /dev/null.  I always
build with full warnings enabled and clean them up as I go.  Trying to
do a big cleanup at the very end is bad.  It is better to keep it
clean as it goes.

Always build with full warnings enabled.  Always clean up warnings as
they are introduced.  Always keep a warning free build.
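
With an autoconf-generated configure script the warnings can be
turned on at configure time; a sketch assuming GCC-style flags:

  ./configure CFLAGS='-g -O2 -Wall -Wextra'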

> Now -- that said, I really see nothing wrong with my original request,
> in the form of an Automake switch. It would be nice to be able to tell
> Automake to build Makefiles that generate this sort of output.
> Unfortunately, you and I aren't going to get much agreement, I think.

As an option I think it is fine too.  But someone would need to do the
work.

> Perhaps, if you were to write a patch to Autoconf, providing a macro
> or switch that generates such Makefiles... This also the GNU way. :)

Yes.

Bob




Re: Report to stdout like Linux kernel compilation does

2008-04-11 Thread Bob Proulx
Robert J. Hansen wrote:
> John Calcote wrote:
> > Hmmm. I'd have to disagree here. I carefully consider every warning I
> > see, and evaluate whether or not it represents a real problem.
> 
> Yes.  This strikes me as perfectly sane behavior.

I also agree with this.  Using reasonable judgement is a good thing
and I apologize if I gave any impression otherwise.

> Insisting on warning-free builds is not sane behavior, especially given
> just how many compilers there are out there and just how many warnings
> can be reached.

On my main two development environments it is quite easy to achieve
warning-free output.  My main compiler for years was the native HP
ANSI C compiler for 64-bit HP-UX.  I almost reacted to the "There are
far more compilers in the world than just GCC" comment with "GCC?
What's that?"  I have actually found gcc to generate less warnings
about things the HP compiler would warn about.  But gcc moves forward
quite quickly and probably many of those are already improved by now.
I am not a single compiler advocate.

> Insistence on perfection is not reasonable.  Not given the staggering
> diversity of platforms, not given the staggering diversity of compilers,
> the expense of getting access to these environments, etc.

Keeping the build clean during development and requiring a clean
build on every other system it is built upon are two different things.  I was talking
about development.  I think it is important to keep things clean as
you go along during development.  Thinking that you will fix them all
later at some other point in time IMNHO is bad.  This is very much
like the "fixing broken windows" theory.

  http://en.wikipedia.org/wiki/Fixing_Broken_Windows

I have worked on projects with so many problems that when new ones
came along people weren't motivated to fix them.  But if the build is
clean and someone adds a new problem to it then they feel pressure to
clean it up quickly.  Valid warnings frequently point to subtle bugs
that are easily avoided but otherwise consume a large amount of debug
time.  Worse is when people don't actually fix the source of the
problem but apply a layer of workaround elsewhere because they didn't
understand the root source of a problem.

> Consider Bob's original statement, the one I was disagreeing with:
> 
> > Always build with full warnings enabled.  Always clean up
> > warnings as they are introduced.  Always keep a warning free
> > build.
> 
> I agree with the first.  I disagree with the second--some warnings are
> erroneous--and I think the third is not practical, given the number of
> different compilers and OSes in use.

Consider the statement to which I was replying:

>>> When you want to clean up the warnings (usually something done near
>>> the end of a development cycle), you simply build with stdout
>>> redirected to /dev/null when you run make a few times, and you'll
>>> see the warnings appear, because they're output to STDERR, not
>>> STDOUT.

Since you disagree with my statement challenging this then does that
mean that you agree with the strategy I was challenging?  That is,
don't check the project build warning status until you are "near the
end of a development cycle" and _then_ start to address warnings?  I
am sorry but that type of strategy triggers an immune response in me.
I strongly believe this is not a good strategy and am not shy about
stating my opinion on it.  Don't let broken windows accumulate.  Fix
broken windows as you go along so that the project is always in as
good of a state as possible all of the time.

Every time I compile on a new platform I look at the warnings
generated.  If my native compile environment isn't smart enough to to
generate a valid warning but another platform is smart and generates a
previously unknown valid warning then I am not going to ignore it.
Every new environment that my code reaches almost always teaches me
something useful about my code.

Bob




Re: Report to stdout like Linux kernel compilation does

2008-04-12 Thread Bob Proulx
Ralf Wildenhues wrote:
> For Emacs, all I know was that M-x compile did all that I ever needed.
> But I'm sure it can be extended for unusual "compiler" output as well.

For emacs use M-x compile to build.  The default compile command is
"make" but may be modified as desired.  To walk through every warning
and error use C-x ` (bound to next-error).  To restart and rescan at
the beginning give the emacs universal argument "C-u" before it.  In
the compile buffer, to go immediately to a specified warning or error
location press RET on the line.

> I can't speak for others, but have heard that the popular ones have this
> feature as well.

Me too.  Additionally I find it hard to believe that popular ones
would have gained popularity without such a feature! :-)

Bob




Re: flag question

2008-04-26 Thread Bob Proulx
Thomas Dickey wrote:
> Bob Friesenhahn wrote:
> > These options only work with GCC.  If the compiler is GCC you can use 
> > them, otherwise skip them.
> 
> The Intel compiler recognizes -Wall and -Werror (but not -pedantic in
> the version I have at hand).

The Intel compiler was actively trying to be gcc-compatible so as to
win over gcc users.  Most native compilers do not support those options.

Bob




Re: directory names with blanks break automake builds

2008-05-17 Thread Bob Proulx
Peter Simons wrote:
> I just ran across an interesting problem with automake 1.10. Just unpack
> any other build into a directory called, say "/tmp/test automake", and
> run it. In my particular case, the makefiles failed to call the local
> copy of install-sh because of the blank. I checked my configure.ac
> script and Makefile.am, but it does seem like there is anything I, as a
> user, could do about that -- this really seems to be a problem in the
> generated code. Has anyone else noticed this phenomenon?

This is a documented limitation.  See the following reference.

  http://www.gnu.org/software/automake/manual/html_node/limitations-on-file-names.html#limitations-on-file-names

Using whitespace in filenames has long been an issue for Unix tools in
general because whitespace is used as an input field separator.
Therefore in the culture, filenames with whitespace are strongly
discouraged.  It just isn't a natural thing to do in a traditionally
command-line based environment.  This leads to 'make' not being
designed to handle filenames with whitespace and of course automake
tries to produce portable make configurations.

Bob




Re: directory names with blanks break automake builds

2008-05-19 Thread Bob Proulx
Peter Simons wrote:
> Bob Proulx writes:
>  > This is a documented limitation.  See the following reference.
>  >
>  >   http://www.gnu.org/software/automake/manual/html_node/limitations-on-file-names.html#limitations-on-file-names
> 
> I am sorry, but that page doesn't seem to mention this problem at all.

Hmm...  I kept seeing "newline" and reading in my head "whitespace".
My bad.  Sorry about that.  I was definitely wrong there.

>  > Using whitespace in filenames has long been an issue for Unix tools
>  > in general because whitespace is used as an input field separator.
> 
> I am sorry, but I have to disagree. Unix tools have no trouble
> whatsoever with blanks in file names -- it's humans who have trouble
> using them because we tend to forget the necessary quoting.

The 'make' tool is a good example of a classic Unix tool and it does
indeed have a problem with whitespace in the filename.

>  > This leads to 'make' not being designed to handle filenames with
>  > whitespace and of course automake tries to produce portable make
>  > configurations.
> 
> I don't know what limitation you refer to. Can you please explain how
> make's design causes these problems?

Unix make can't handle spaces in filenames.  It always splits upon
spaces.
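
A minimal illustration, as a made-up Makefile fragment:

  # Intended as one target named "my file.txt", but make parses the
  # rule as two separate targets, "my" and "file.txt", because the
  # target list splits on whitespace.
  my file.txt:
          touch "my file.txt"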

Bob




Re: automake distribution packages

2008-08-13 Thread Bob Proulx
[EMAIL PROTECTED] wrote:
> This package is distributed as a tar.gz with no source files; only
> binaries.

Automake is written in Perl, and Perl programs are simply text files;
the source and the executables are one and the same.  But if you wish to
modify automake then working from the version control sources is best.
The documentation for this is on the automake home page.

  http://sources.redhat.com/automake/

What distribution package are you trying to install?

> So, what is the install procedure?  Where should they be put when
> unpacking?

Here is one way to build and install automake into /usr/local/bin/.

  wget ftp://ftp.gnu.org/pub/gnu/automake/automake-1.10.1.tar.gz
  tar xzvf automake-1.10.1.tar.gz
  cd automake-1.10.1
  ./configure
  make
  make check
  make install

If your autoconf isn't new enough then before doing the configure
above install a new autoconf.

  wget ftp://ftp.gnu.org/pub/gnu/autoconf/autoconf-2.62.tar.gz
  tar xzvf autoconf-2.62.tar.gz
  cd autoconf-2.62
  ./configure
  make
  make check
  make install

Hope that helps,
Bob




Re: Install to lib64

2009-01-25 Thread Bob Proulx
Jason Sewall wrote:
> I'm maintaining an autotools-configured project, and I've noticed that
> the make install resulting from my build (on x86_64 arch, linux) puts
> generated libraries in prefix/lib instead of prefix/lib64 - is there
> something I should do differently, or is the the expected behaviour?

That is the expected behavior.  On pure 64-bit systems lib is
usually the expected location.  On my 64-bit system lib and not
lib64 is the correct location since there isn't any 32-bit version.
64-bit systems are first class citizens too.

If you need it in a specific location then you should tell it that at
configure time.  You can always set up a site specific configuration
so that it will default to a particular location for you on your
system if you desire.
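
For example, to get libraries into lib64 you would say so explicitly
(the paths here are illustrative):

  ./configure --prefix=/usr --libdir=/usr/lib64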

The locations that the autotools default to using are the GNU standard
locations.  The GNU system is designed to be its own system and it has
its own set of standards.  This has been around a long time and
predates both POSIX and the newer FHS.  But the autotools are very
configurable and everyone is free to set up installation locations as
they see fit.

The typical procedure for packagers is to set any system specific
locations on the configure command line but to codify those locations
in the package build script.  This is usually done by package
management systems in different ways.  On RPM based systems (e.g. Red
Hat, SuSE) look at using the %configure macro which is designed to do
this for you easily.  On DPKG based systems (e.g. Debian, Ubuntu) this
is usually done in the rules file.  The syntax and interface are
standardized and well known.

Bob




Re: execvp: /bin/sh: Argument list too long

2010-11-09 Thread Bob Proulx
Pippijn van Steenhoven wrote:
> I am root on my (Linux) system and I set the stack size to unlimited. The
> libtool macro reported a few billion (or something other really large)
> for maximum argument list length, bash also agreed (it easily executed
> the "distdir" target when copied into a bash script), but make doesn't.
> Both gnu make and pmake abort with the "too long" message.

What Linux kernel version are you using?  Note that linux-2.6.23 and
later kernels have changed how this is handled.

  http://www.in-ulm.de/~mascheck/various/argmax/

  http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=b6a2fea39318e43fee84fa7b0b90d68bed92d2ba

  http://www.gnu.org/software/coreutils/faq/#Argument-list-too-long
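
To check what limit a particular system actually reports (a quick
probe, not part of the original exchange):

  getconf ARG_MAX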

Bob



file D left after automake configure test

2001-01-01 Thread Bob Proulx

After running configure I notice that an empty file "D" is left around
as trash.  I investigated and found the following.  This occurs
on HP-UX using the native ANSI compiler and running the latest test
automake-1.4b and the new dependency tracking code.  The configure
script runs the following snippet:

  depmode=gcc source=conftest.c object=conftest.o depfile=conftest.Po \
    tmpdepfile=conftest.TPo /bin/sh config/depcomp cc -c conftest.c -o conftest.o

This occurs inside the test that prints "checking dependency style".
Expanding on this a little by using sh -x:

  depmode=gcc source=conftest.c object=conftest.o depfile=conftest.Po \
    tmpdepfile=conftest.TPo /bin/sh -x config/depcomp cc -c conftest.c -o conftest.o
+ test -z gcc
+ test -z conftest.c
+ test -z conftest.o
+ depfile=conftest.Po
+ tmpdepfile=conftest.TPo
+ rm -f conftest.TPo
+ test gcc = hp
+ test gcc = dashXmstdout
+ test -z 
+ gccflag=-MD,
+ cc -c conftest.c -o conftest.o -Wp,-MD,conftest.TPo
cpp: error 3: Too many arguments to cpp.
+ stat=1
+ rm -f conftest.TPo
+ exit 1

This leaves an empty file called "D" in the current directory.  I
don't know why it is testing for depmode=gcc with the native ANSI C
compiler.  I don't have gcc in my path and so it does not find one.  I
believe that to be the root of this problem.  This should be
depmode=hp instead of depmode=gcc for this configuration.

Checking the man page for cpp on hpux and I see the following:
:  -M[makefile]   Generates makefile dependencies and sends the results
: to the file makefile.  If the argument makefile is
: omitted, the result is sent to the standard error.

So everything is making sense.  -Wp passes options to cpp and the -M
option to cpp is sending dependencies to the file "D" and then
complaining that 'conftest.TPo' is too many arguments.  But the syntax
used was gcc syntax and a completely different result was expected.
If the depmode is set to hp then things work better here.

I should dig into this some more but this is where I am stopping for
the night and wanted to report this minor problem.  This appears to be
a spurious result since, for all other intents and purposes, everything
configures and works well from this point on.

BTW...  I like the new dependency code!  I can now swap between HP-UX
ANSI C and Linux GCC interchangeably.

Bob Proulx




Re: Automake release

2001-05-20 Thread Bob Proulx

Random thoughts on version numbers...

> to do this with automake since I've been saying for a long time "1.5
> will do this", "1.5 will do that".  Bleah, my bad.

> Agreed.  Buuut... 1.4a-p1 seems wrong if HEAD is at 1.4c.  Worse, releasing 
> 1.4b-p1 sounds like it is related to 1.4b.  I still don't like it, and won't 
> adopt it for libtool or m4.

> In all this is an ugly situation.  I'm thinking of hacking automake to
> recognize that `1.4-p' is between 1.4 and 1.5 (so that
> AUTOMAKE_OPTIONS=1.4-p1 will work ok when 1.5 is used).

> In gnits there are only two numbers.  The third one, if it exists,
> indicates an alpha release.

If the version is 1.4-p1 then will an RPM package version be
automake-1.4-p1-1.arch.rpm?  Oops.  That is too many dashes.  Unless
the release of the package is p1, which would work exactly once.  But
if you need to do a second release that would be p2.  Oops that does
not work.  Using a '-' in the name is bad for people trying to make an
RPM package of the code.  The packager is forced to coerce the name
into something without a dash and then it does not match.

IMNHO version numbers should always be whole counting numbers.  I
think everyone is moving in that direction.  I understand that many
projects have been in process for literally years and have legacy to
maintain.  With that method automated tools can understand which
version is later than another version.  Otherwise, even a human can
get confused.

[Soapbox Follows]

A terrible situation exists with some packages.  Some people use a
letter to mean a patch on top of the version.  Others use it to mean a
test beta release candidate for that version.  BIND comes to mind as
an example.  Which means sometimes it is before and sometimes it is
after and you just have to know.  I would like to pass a law against
anything that immoral.

There should be no letters in a version.  Is 8.2.3p4 a test release
for an upcoming 8.2.3 or a patch on top of 8.2.3?  It turns out this
is a test release for an upcoming 8.2.3.  But fileutils 4.0e is a test
beta release for 4.1.  Eventually all letters were consumed and 4.0.45
was the last beta release before the release of 4.1.  It would have
been more consistent to use numbers all along since it is unlikely to
run out of those.

Another problem is that some people think version numbers are
floating point numbers.  I don't think that is a defensible position.
Especially when you have test betas with a second dot.  What floating
point number is 1.2.3?  But it means that if you release 2.50 people
will constantly get that version confused with version 2.5 which is
likely a much older version.  But if it is a floating point number
then all zeros on the end could be dropped.  Therefore I advocate
avoiding releasing versions ending in zero.  It only confuses people.
Is version 1.10 the same as version 1.1?  Probably not.  Better to
avoid 1.10 and release it as 1.11 so that there is no doubt.

Floating point numbers in general are a problem.  Sometimes people try
to reserve space with zeros.  1.02 is a bad version number.  Is that
the same as 1.2?  Did they really mean 1.0.2?  Also 5.005_03 is a
terrible way to version.  At least they are going with 5.6.1 for later
releases.  Is netscape-4.61 newer or older than netscape-4.7?  Of
course it is older just because we know it is but how do you code the
algorithm which always gets it right?

Version numbers should be whole counting numbers.  Linux 2.2.18 is a
fine version number.  Later than 2.2.14 and earlier than 2.4.4.  No
confusion there.
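
With purely numeric dotted components the comparison is even
mechanical; a sketch using sort with numeric keys on dot-separated
fields:

  printf '%s\n' 2.4.4 2.2.18 2.2.14 | sort -t. -k1,1n -k2,2n -k3,3n
  # prints 2.2.14, 2.2.18, 2.4.4 -- oldest to newest
  # (modern GNU sort also has -V for version ordering directly)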

Bob




Re: aclocal -I

2001-06-14 Thread Bob Proulx

Lars> It would be great if one could add -I options to aclocal by just placing
Lars> macros in configure.ac, like:
Lars> AC_CONFIG_MACRO_DIR(conf/m4macros)

I would like that too.

Tom> You can use ACLOCAL_AMFLAGS (bad name, I know) in Makefile.am.

Uh, like this?:

  ACLOCAL_AMFLAGS = -I config

I do see that in the generated Makefile{,in}.  That helps.

But then I am confused (easily in this case).  I thought that aclocal
should be run before running autoheader before running automake before
running autoconf.

Paraphrasing the goat book's appendix diagrams with text I see the
following relationship.  Just the good parts.

  configure.in > (aclocal) > aclocal.m4
  configure.in, aclocal.m4 > (autoheader) > config.h.in
  configure.in, Makefile.am > (automake) > Makefile.in
  configure.in, aclocal.m4 > (autoconf) > configure
  Makefile.in, config.in > (configure) > Makefile

Therefore shouldn't I run the following sequence to bootstrap the
project from nothing?  This order comes from page 70 of the goat book
which I know describes an older version of the programs.

  aclocal -I config \
  && autoheader \
  && automake --add-missing --copy --include-deps \
  && autoconf

Which would mean that adding this ACLOCAL_AMFLAGS to Makefile.am is
too late to be used for the first running of aclocal.  But since I am
sure that things have changed due to progress, may I ask what is the
correct ordering to bootstrap a project from first sources and if
possible why so that I may understand?

And as long as we are on the subject of placing options into the
files, would the following be the proper way to move those options
into the Makefile.am?

  AUTOMAKE_OPTIONS = add-missing copy include-deps

Thanks
Bob




Re: spam hell

2002-01-30 Thread Bob Proulx

> It is possible to set up moderation.  Actually, moderation is a wrong 
> word.  The only responsibility of the moderator show be preventing spam.

I help moderate a few of the GNU lists which are moderated solely to
prevent spam and I can tell you from personal experience that it is a
pain.  Mailman has only the lame web interface.  Oh I would love an
email interface!  Then I could apply some real filtering such as
spamassassin to it.

> If mailman could just send copies of held messages to me (and maybe some
> others for vacation overlaps), and the mails I send replies on were posted
> to the list while the mails I don't reply to were discarded after a week,
> I could be willing to do something like that...

You are making me drool with desire for such a thing.

Bob




files left after distclean: How to clean those?

2002-08-19 Thread Bob Proulx

I am wanting to use help2man to produce the man page for a program.  I
have a Makefile.am with the following.

  dist_man_MANS = example.8

  example.8: src/example
          help2man --output=example.8 ./src/example

But when I run 'make distcheck' I get the following error.

  Error: files left after distclean

I am using:

  autoconf (GNU Autoconf) 2.53
  automake (GNU automake) 1.5

I think I understand what is happening and why.  But even after
reading the docs (specifically the sections "What Gets Cleaned" and
"When Automake Isn't Enough") I am still at a loss as to the
correct way to proceed.  How do I remove the file during a clean?

[Aside: Since this is a normal file to build I would expect the normal
clean to remove it.  But it requires help2man as a build dependency
which may not exist on a developer system.  So perhaps it should be
maintainer-clean-hook.  Hmm...  But if they are a developer then I
will assume them to have all of the development tools available.
Let's not worry for now.  Let's just make it work with either.]

I will be explicit.  Should I put in a distclean-hook: target that
removes this file?  Of course this does not work.  I can't get the
hook to be called by 'make distcheck'.

  distclean-hook:
          rm -f example.8

What is the normal way to handle using help2man to create man pages?
(I am just going to avoid creating man pages for the moment.)

Thanks
Bob




Re: files left after distclean: How to clean those?

2002-08-19 Thread Bob Proulx

Bruce Korb <[EMAIL PROTECTED]> [2002-08-19 13:51:40 -0700]:
> Do you distribute example.8?

I could go either way.  Let's say no.

> If not, add it to the DISTCLEANFILES,

That worked!  Thanks!

So now my Makefile.am looks like this.  Anything flagrantly wrong?
Otherwise this is working for me.

Thanks
Bob

SUBDIRS = src

EXTRA_DIST = \
  README \
  depcomp \
  example.spec \
  example.spec.in

MAINTAINERCLEANFILES = \
  aclocal.m4 \
  configure \
  depcomp \
  example.spec

DISTCLEANFILES = example.8

info_TEXINFOS = example.texi

dist_man_MANS = example.8

example.8: src/example
        help2man --output=example.8 ./src/example