Re: test setup advice

2006-03-28 Thread Ralf Wildenhues
* Ralf Wildenhues wrote on Tue, Mar 28, 2006 at 09:59:26AM CEST:

> TESTS = $(check_PROGRAMS)
> check_PROGRAMS = tests/a$(EXEEXT) tests/b$(EXEEXT) tests/c$(EXEEXT)

I should add that the $(EXEEXT) suffixes are not necessary with CVS
Automake any more.  They were never necessary for *_PROGRAMS; for
TESTS, until the recent fix, they were necessary for cross compiles
to w32, but typically not for native w32 builds.

So you can just omit them if you like.
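
For instance (a sketch, keeping the same layout as above), the
example then reduces to:

TESTS = $(check_PROGRAMS)
check_PROGRAMS = tests/a tests/b tests/c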

Cheers,
Ralf




Re: BUILT_SOURCES: dependencies not copied to build_dir

2006-03-28 Thread Ralf Wildenhues
Hi Michael, Tom,

* tom fogal wrote on Mon, Mar 27, 2006 at 10:38:27PM CEST:
>  <[EMAIL PROTECTED]> Michael Biebl writes:
> 
> >ngcs_marshal.c: ngcs_marshal.ngci idef.py
> >        $(srcdir)/idef.py ngcs_marshal
> >
> >ngcs_marshal.h: ngcs_marshal.ngci idef.py
> >        $(srcdir)/idef.py ngcs_marshal
> 
> Not sure if it will work, but perhaps you could just use the .ngci file
> without copying it?

Yes, that was my thinking as well.  Copying is a bad idea: if that
file ends up in the distribution tarball, some make implementations
will behave "interestingly" and subtly differently when the source
and build trees contain different versions of it (described in the
Autoconf manual).

>  Perhaps rules like:
> 
> ngcs_marshal.c: $(srcdir)/ngcs_marshal.ngci $(srcdir)/idef.py
>         $(srcdir)/idef.py ngcs_marshal
> 
> ngcs_marshal.h: $(srcdir)/ngcs_marshal.ngci $(srcdir)/idef.py
>         $(srcdir)/idef.py ngcs_marshal
> 
> would work?

No, the rules are fine without the `$(srcdir)/', thanks to the VPATH
feature that most make implementations offer.
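
To make this concrete (a sketch; the actual value is substituted by
configure), the generated Makefile contains a line like

VPATH = @srcdir@

so when ngcs_marshal.ngci or idef.py does not exist in the build
tree, make looks for it in $(srcdir) automatically.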

> Or do the files need to be copied for other reasons as
> well?

I suppose the idef.py program wants to know the location of its
input files.  But since we don't know what it does, nor whether it
can be told (maybe by a command line argument) where to look for its
input, Michael has to find this out for himself, or enlighten us.  :-)

Cheers,
Ralf




Re: BUILT_SOURCES: dependencies not copied to build_dir

2006-03-28 Thread Michael Biebl
Ralf Wildenhues wrote:
> Hi Michael, Tom,

Hi,

thanks for your help so far.

> * tom fogal wrote on Mon, Mar 27, 2006 at 10:38:27PM CEST:
>>  <[EMAIL PROTECTED]> Michael Biebl writes:
>> 
>>> ngcs_marshal.c: ngcs_marshal.ngci idef.py
>>>         $(srcdir)/idef.py ngcs_marshal
>>>
>>> ngcs_marshal.h: ngcs_marshal.ngci idef.py
>>>         $(srcdir)/idef.py ngcs_marshal
>> Not sure if it will work, but perhaps you could just use the .ngci file
>> without copying it?
> 
> Yes, that was my thinking as well.  Copying is a bad idea: if that
> file ends up in the distribution tarball, some make implementations
> will behave "interestingly" and subtly differently when the source
> and build trees contain different versions of it (described in the
> Autoconf manual).

This is the solution I came up with:

EXTRA_DIST = ngcs_marshal.ngci idef.py ngcs.py
CLEANFILES = ngcs_marshal.h ngcs_marshal.c

ngcs_marshal.c: ngcs_marshal.ngci idef.py
        $(srcdir)/idef.py $(srcdir)/ngcs_marshal @

ngcs_marshal.h: ngcs_marshal.ngci idef.py
        $(srcdir)/idef.py $(srcdir)/ngcs_marshal @

This way idef.py and ngcs_marshal.ngci are not copied to the
builddir, and whenever one of them changes in the srcdir the header
and C file are regenerated.
I don't need $(srcdir) in the dependency line, but if I omit it in
the build rule I get either:
  make: idef.py: Command not found
or
  IOError: [Errno 2] No such file or directory: 'ngcs_marshal.ngci'

I also had to change idef.py to take an additional parameter (@)
where I can tell it the location of the output file; otherwise the
file is created in the srcdir.  I wanted to avoid changing idef.py
at first (hence the copying), but that turned out not to be possible.

Cheers,
Michael
-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?





Re: BUILT_SOURCES: dependencies not copied to build_dir

2006-03-28 Thread Ralf Wildenhues
Hi Michael,

* Michael Biebl wrote on Tue, Mar 28, 2006 at 12:16:18PM CEST:
> This is the solution I came up with:
> 
> EXTRA_DIST = ngcs_marshal.ngci idef.py ngcs.py
> CLEANFILES = ngcs_marshal.h ngcs_marshal.c
> 
> ngcs_marshal.c: ngcs_marshal.ngci idef.py
>         $(srcdir)/idef.py $(srcdir)/ngcs_marshal @

Do you mean `$@' instead of `@' here?  If not: what is the `@' supposed
to do?

> ngcs_marshal.h: ngcs_marshal.ngci idef.py
>         $(srcdir)/idef.py $(srcdir)/ngcs_marshal @
> 
> This way idef.py and ngcs_marshal.ngci are not copied to the
> builddir, and whenever one of them changes in the srcdir the header
> and C file are regenerated.
> I don't need $(srcdir) in the dependency line, but if I omit it in
> the build rule I get either:
[errors]

Well, yes, you do need it in the rule.  If they were inference rules,
you could use `$<', but in an explicit rule like the above that is
not portable.
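
To sketch the difference (using a hypothetical suffix rule for this
case):

# `$<' is portable here, because this is an inference rule:
.ngci.c:
        $(srcdir)/idef.py $< $@

# but not in an explicit rule; name the input explicitly instead:
ngcs_marshal.c: ngcs_marshal.ngci idef.py
        $(srcdir)/idef.py $(srcdir)/ngcs_marshal $@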

> I also had to change idef.py to take an additional parameter (@)
> where I can tell it the location of the output file; otherwise the
> file is created in the srcdir.  I wanted to avoid changing idef.py
> at first (hence the copying), but that turned out not to be
> possible.

That's definitely the better way.

Cheers,
Ralf




Re: BUILT_SOURCES: dependencies not copied to build_dir

2006-03-28 Thread Michael Biebl
Ralf Wildenhues wrote:
> Hi Michael,
> 
> * Michael Biebl wrote on Tue, Mar 28, 2006 at 12:16:18PM CEST:
>> This is the solution I came up with:
>>
>> EXTRA_DIST = ngcs_marshal.ngci idef.py ngcs.py
>> CLEANFILES = ngcs_marshal.h ngcs_marshal.c
>>
>> ngcs_marshal.c: ngcs_marshal.ngci idef.py
>>         $(srcdir)/idef.py $(srcdir)/ngcs_marshal @
> 
> Do you mean `$@' instead of `@' here?  If not: what is the `@' supposed
> to do?
> 

Sure, I meant '$@'. Thanks for the catch ;-)

Michael

-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?





Re: BUILT_SOURCES: dependencies not copied to build_dir

2006-03-28 Thread Stepan Kasal
Hello,

On Tue, Mar 28, 2006 at 12:16:18PM +0200, Michael Biebl meant to write:
> EXTRA_DIST = ngcs_marshal.ngci idef.py ngcs.py
> CLEANFILES = ngcs_marshal.h ngcs_marshal.c
> 
> ngcs_marshal.c: ngcs_marshal.ngci idef.py
>         $(srcdir)/idef.py $(srcdir)/ngcs_marshal $@
> 
> ngcs_marshal.h: ngcs_marshal.ngci idef.py
>         $(srcdir)/idef.py $(srcdir)/ngcs_marshal $@

Yes, this is a good solution.

One question, though: does `idef.py ... *.c' produce both files, or
only the .c one?

If it produces only one of the files, the makefile is correct.

If it produces both of them, it may not work with parallel make.
(See http://sourceware.org/automake/automake.html#Multiple-Outputs .)

Alternatively, you might make use of Automake's ability to handle
new extensions, something like:

EXTRA_DIST = idef.py ngcs.py
foobar_SOURCES = ngcs_marshal.ngci \
        this.c \
        that.c ...

.ngci.c:
        $(srcdir)/idef.py $< $@

.ngci.h:
        $(srcdir)/idef.py $< $@
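
(A hedged aside: for a new extension like `.ngci', Automake also
wants the suffix declared, along the lines of

SUFFIXES = .ngci

so that it knows about the suffix rules above.)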

If the Python script produces both output files, the latter rule
should be something like:

ngcs_marshal.h: ngcs_marshal.c
        @if test -f $@; then :; else \
          rm -f ngcs_marshal.c; \
          $(MAKE) $(AM_MAKEFLAGS) ngcs_marshal.c; \
        fi

Have a nice day,
Stepan




Re: output directory of generated files

2006-03-28 Thread Stepan Kasal
Hello,

On Thu, Mar 09, 2006 at 02:48:13PM +0100, Thomas Porschberg wrote:
> %.qm: %.ts
>         $(PROG1) $<
> 
> foo.h: foo.qm
>         $(top_srcdir)/utils/PROG2 < $< > $@
...
> PROG2 expected foo.qm now in BUILDDIR/src and not under project/src/.
> (surprisingly it worked when I started make a second time ?!)

It seems that on the second run make was able to find the .qm file
using the VPATH feature.  But during the first attempt, the file
wasn't there at startup, and make didn't expect that the rule would
create it there.

> I changed the rule to:
> 
> %.qm: %.ts
>         $(PROG1) $< -qm $@
> 
> Now foo.qm is created in BUILDDIR/src and utils/PROG2 has no problem.

It's always better to create non-distributed files in the builddir.

There are two tiny problems, though:

1) you should be sure that src/foo.qm doesn't exist; if it existed,
some make implementations would update it instead of creating a new
one in BUILDDIR/src (see ``Limitations of Make'' in the Autoconf
manual).

2) the rule is more portable this way:

.ts.qm:
        $(PROG1) $< -qm $@
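
(A hedged aside: since neither .ts nor .qm is a suffix make or
Automake knows about, the Makefile.am also needs something like

SUFFIXES = .ts .qm

for this suffix rule to be recognized.)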

Have a nice day,
Stepan Kasal




Re: output directory of generated files

2006-03-28 Thread Thomas Porschberg
Am Tue, 28 Mar 2006 15:36:36 +0200
schrieb Stepan Kasal <[EMAIL PROTECTED]>:

> Hello,
> 
> On Thu, Mar 09, 2006 at 02:48:13PM +0100, Thomas Porschberg wrote:
> > %.qm: %.ts
> >         $(PROG1) $<
> > 
> > foo.h: foo.qm
> >         $(top_srcdir)/utils/PROG2 < $< > $@
> ...
> > PROG2 expected foo.qm now in BUILDDIR/src and not under
> > project/src/. (surprisingly it worked when I started make a second
> > time ?!)
> 
> It seems that on the second run make was able to find the .qm file
> using the VPATH feature.  But during the first attempt, the file
> wasn't there at the startup and make didn't expect that the rule will
> create it there.
> 
> > I changed the rule to:
> > 
> > %.qm: %.ts
> >         $(PROG1) $< -qm $@
> > 
> > Now foo.qm is created in BUILDDIR/src and utils/PROG2 has no
> > problem.
> 
> It's always better when you create non-distributed files in builddir.

OK, so I was not barking up the wrong tree.

> 
> There are two tiny problems, though:
> 
> 1) you should be sure that src/foo.qm doesn't exist; if it existed,
> some make implementations would update it, instead of creating a new
> one in BUILDDIR/src  (see ``Limitations of Make'' in the Autoconf
> manual).
> 
> 2) the rule is more portable this way:
> 
> .ts.qm:
>   $(PROG1) $< -qm $@
> 
Thank you for the hints.  I will take them into account.

Thomas





Re: test setup advice

2006-03-28 Thread tom fogal
 <[EMAIL PROTECTED]> Ralf Wildenhues writes:
>Hi Tom,

Hi!

>* tom fogal wrote on Mon, Mar 27, 2006 at 11:17:51PM CEST:

>> My /tests directory then uses that same `everything' library to link
>> all of the test programs against.  However the build trees for /src and
>> /tests are separate; the toplevel Makefile.am uses SUBDIRS to descend
>> into each of them.  The advantage of this whole setup is that adding a
>> test is very simple:
>
>Just FYI: if you used a non-recursive setup, all you'd need to change
>would be (don't forget subdir-objects!):

Well... I sort-of did ;).  As it turns out, /src is a large tree
which is built entirely non-recursively, and /tests similarly
(although the latter isn't as large).  It's only recursive in
jumping into those two subdirectories.
I'm not exactly sure why I did things that way though, and now that
you bring it to my attention I can't think of a good reason why I
would even want those two stages of the build separated out like
they are.

>> 1) write the test into e.g. "test.cpp"
>> 2) in /tests/Makefile.am, append "test" to a variable I have
>>defined.
>
>In Makefile.am, append "tests/test" instead.
>
>> 3) in /tests/Makefile.am, add a "test_SOURCES=test.cpp" line.
>
>In Makefile.am, add a "tests_tests_SOURCES = test.cpp" line.
>
>> Unfortunately this lacks a lot of dependency information.  My new
>> "test" depends on the `everything' library, which is correct, but it
>> *really* only depends on one or two files from my /src tree.
>
>So I assume you have "LDADD = ../src/libeverything.a".

Yes, this is correct.


>> The solution, of course, is to drop the `everything' library and
>> explicitly code in the dependencies on particular files for every
>> individual test.  This is going to cause /tests/Makefile.am to become
>> very large and intricate though, and thus I believe adding a test will
>> be a very error prone activity.
>
>You could change incrementally, you could use more than one convenience
>library to have more fine-grained control, and you could even list
>sources explicitly.
>
>Let me show you an example with C code:

>Here, by default, all programs will be linked against libeverything.
>That is overridden for tests/b and tests/c: the former only uses s2.c,
>while the latter only uses libsomething.
>
>The respective recursive setup looks like this:
>$ cat Makefile.am
>SUBDIRS = src tests
>$ cat src/Makefile.am
>noinst_LIBRARIES = libeverything.a libsomething.a
>libeverything_a_SOURCES = s1.c s2.c
>libsomething_a_SOURCES = s2.c
>$ cat tests/Makefile.am
>TESTS = $(check_PROGRAMS)
>check_PROGRAMS = a$(EXEEXT) b$(EXEEXT) c$(EXEEXT)
>a_SOURCES = a.c
>b_SOURCES = b.c ../src/s1.c
>c_SOURCES = c.c
>LDADD = ../src/libeverything.a
>b_LDADD =
>c_LDADD = ../src/libsomething.a

Hrm, the multiple libraries idea looks like a fairly good compromise
between having each of {a,b,c,...,zz}_SOURCES list its set of
dependent source files and my current `solution' (listing
practically nothing).

>(and obviously you need to adjust the list of files in AC_CONFIG_FILES
>in configure.ac, too).  Does that help?

It's an order of magnitude better than what I've got now, certainly.
What I'd really like to see is a program which parses tests/*.cpp and
generates
   test1_SOURCES = ../src/abc.cpp ../src/x.cpp
   test2_SOURCES = ../src/blah.cpp
   test3_SOURCES = (large 30 line list perhaps, for a huge test)
which I could then include into the appropriate part of my
(auto)Makefile.  Doing the above is really ideal, I just think doing
it manually would cause me to screw it up constantly.

Multiple libraries, perhaps one per program module, seem like a
reasonable alternative that's human-maintainable and gets most of
the benefits of doing the full source listings.

Thanks!

-tom




Re: Partial linking with _RELOCATABLES - Proposed enhancement (revised and commented)

2006-03-28 Thread Marc Alff


Hi Alexandre and All,


Alexandre Duret-Lutz wrote:


I didn't know "ld -r".  How portable is it?
 


The command might be named differently per architecture/OS, but the
concept exists on all platforms that I know of (it is called
incremental linking, or partial linking).

"ld -r" as such can be found on Unixes (not only Linux).  It's very
popular with embedded systems, and has been around for a while.

libtool already implements this feature, so I have to guess it's
portable enough :)



Will your proposal allow the creation of *.a, *.so, and
binaries out of relocatable objects?  I'm wondering, because the
*.o should be compiled differently if they are meant to be part
of a shared library, and the _RELOCATABLES syntax doesn't
indicate where the object will be used.
 


You mean -fPIC and all that?  Thanks a lot for pointing it out.

My initial thought was to use only one syntax, and rely on someone
to pass the correct CFLAGS to the compiler (with -fPIC and the like,
or not), but that is old school and pushes the platform gory details
back onto the package maintainers.

Instead, there is a much better way.

Looking at how libtool works, it creates two explicitly different
syntaxes (*.o and *.a for static code, *.lo and *.la for code that
can be shared), and in fact injects the "-fPIC" etc. compiling and
linking options at the last minute, based on the platform, so that
platform-dependent options are pushed away from the maintainer and
into libtool.

I propose then to match this syntax, and leverage libtool to do just
that:

noinst_RELOCATABLES = parts.o
parts_o_SOURCES = part1.c part2.c

will then compile the code to generate "plain" part1.o and part2.o,
and link them together into parts.o, which is suitable to be used
later:
- in another *.o relocatable,
- in a static *.a library,
- in an executable.
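
(To sketch what the generated rule might boil down to in the static
case, assuming a linker that spells partial linking as "ld -r" as
discussed above:

parts.o: part1.o part2.o
        ld -r -o parts.o part1.o part2.o

The actual command would of course have to come from libtool or a
configure test.)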

noinst_RELOCATABLES = parts.lo
parts_lo_SOURCES = part1.c part2.c

will then compile the code with the magic compiling options to
generate shared part1.lo and part2.lo, and link them together into
parts.lo, which is suitable to be used later:
- in another shared *.lo relocatable,
- in a shared *.la library,
- in an executable (if that works; to be verified).

MA> The difference between "glued.o" and "libglued.a" is that a
MA> "relocatable" object contains all the code in one block, with
MA> internal link dependencies already resolved,


Just to make sure I understand: would I get a similar result if
instead of gluing these objects I had compiled the concatenation
of all their sources in a single run of the compiler?
 


Yes, pretty much.


(Consider that question from the point of view of someone who is
used to splitting static libraries into as many objects as possible
so the linker picks only what it needs and not more.)
 


Before being burned as a Heretic by an angry mob, let me clarify a
bit:

- I am not suggesting that everyone should stop using *.a or *.la
and link in this (by definition more bulky) way.  If someone takes
care to split code into small objects (ideally, up to one object per
function, so I can link with cos() if I use it, but don't get sin()
if I don't need it), to reduce the final binary size to the minimum,
by all means, continue to do so!

- However, there are cases where what the linker thinks is minimal
and satisfies all the "obvious" dependencies is in fact too small
and misses some code that should have been there (back to the
auto-register-plugin example), which is when partial linking can
become useful (or the only way).

- I view the _RELOCATABLES proposal as just another tool; it's still
up to the package maintainer to use the tool best suited to the
problem at hand (I just happen to have an unusual problem).



[...]

MA> The main concern is that how the source code is organized into
MA> directories with recursive makefiles is an implementation choice;
MA> it should **not** impact what the final deliverable looks like
MA> (maybe I still want ONE *.a or *.so or binary at the end, not
MA> expose a collection of *.a or *.so to my users).

Of your three points, this is the only one I do not understand.
If you use Libtool convenience libraries you'll have a
collection of *.la in your build tree for all the subparts of
your project, but then you'll gather all these in a big *.la
library which you will install.  So the final deliverable is
still one library, not a collection of libraries.
(Example in the "Libtool Convenience Libraries" section of the
Automake manual.)
 

Well, I just learned that libtool can help with a lot of problems
that I had no idea were solved already.  What I described was
basically using *.o files as convenience libraries, and was based on
previous experience on a very old project (as in 1992, on an
embedded project; it was the only solution then).  This point is
gone, then.  Note that the issue of forcing the linking of code that
is not referenced explicitly still exists.


MA>  Part III : Proposed enhancement 

MA> In short, Makefile.am can be wr

Re: test setup advice

2006-03-28 Thread Ralf Wildenhues
Hi Tom,

* tom fogal wrote on Wed, Mar 29, 2006 at 05:20:18AM CEST:
>  Ralf Wildenhues writes:
> >Does that help?
> 
> It's an order of magnitude better than what I've got now, certainly.
> What I'd really like to see is a program which parses tests/*.cpp and
> generates
>    test1_SOURCES = ../src/abc.cpp ../src/x.cpp
>    test2_SOURCES = ../src/blah.cpp
>    test3_SOURCES = (large 30 line list perhaps, for a huge test)

Nothing (in Automake) prevents you from having the sources of two
implementations of the same interface in the same directory, so you
would have to write this program yourself, making use of additional
assumptions that hold for your code to find this out.
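
For what it's worth, one hedged way to hook such generated lists in
(the fragment name is hypothetical): Automake supports an `include'
directive processed when automake itself runs, so your tool could
write the *_SOURCES lines into a fragment, and tests/Makefile.am
would pull it in with

include $(srcdir)/sources.mk

regenerating the fragment before rerunning automake.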

> Multiple libraries, perhaps one per program module, seem like a
> reasonable alternative that's human-maintainable and gets most of
> the benefits of doing the full source listings.

FWIW, my experience is that with C++ code this isn't all that
effective: with most of the code packed into templated headers,
changes will usually require recompiling large parts of the code.

Cheers,
Ralf




Re: test setup advice

2006-03-28 Thread Marc Alff


Hi Tom,


tom fogal wrote:


I'm looking for advice on how to structure a build system using
automake.

Thanks for any solutions / ideas / comments.

-tom
 



These are just random thoughts / ideas that I have used on C++
projects, just my two cents (and in all cases my personal opinion;
your mileage may vary).  It's not automake-specific and rather high
level, but I implemented these with automake and it worked well.


Any developer will use the excuse that tests are slow to build or
difficult to maintain to actually not maintain them, so being
sensitive to that is very healthy.  However, it's not limited to
tests alone but true for the entire code base, so having code that
builds fast and lean improves development in general.
To achieve that, a couple of techniques do work in C++:
- Try to limit the #include dependency hell.  Sometimes a forward
declaration (class foo;) is much better than actually including
foo.h.
- Avoid including /usr/include/xxx.h (C stdio, C++ streams, etc.) in
header files; include those in .cpp files instead.
- Once in a while, review the dependencies generated by automake on
header files: it's very instructive and shows you what the compiler
has to deal with.  I personally prefer to cut dependencies rather
than using pre-compiled headers or other things like that.
- Templates are powerful, but can quickly become hell when abused.

Tests do influence design: for the production code itself, the
better it's designed, the easier it is to test.  Writing tests at
the same time as the production code actually influences the code
toward better karma.  Writing tests after the fact on spaghetti code
is just damage control, not to mention very difficult.


Separating "production" and "test" code in different directories is very 
healthy, for many reasons :
- all the test code can be instantly unplugged to make sure that the 
production code builds without using
test code while compiling against dummy header files, or worst linking 
against test stubs.
- porting your code to a different platform can be done for the 
production code alone if needed,

in case the test code uses tools not available everywhere.
- looking from the CVS point of view, someone checking in a change in 
production code makes me more
nervous that someone checking in a new test, and having a simple way to 
find out which code is which

really helps.

On tests themselves, I make a distinction between:
- test drivers (code that you actually have to write for a new
family of test cases),
- test cases (just a description of the test itself; it might not be
C++ code but just a text or XML file).


The idea is to write a test driver that reads an external script
that tells it what test to execute, or what function to call with
what value.  This code is typically more complex to write, but is
written once.

A test case (typically a script) is then written for each test.
Adding a test case does not involve any build, and can be done at
any time.  It's also very useful for debugging, since anyone
(including non-programmers) can write a new case at any time
(provided, of course, that it uses the features of existing
drivers).
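
(In automake terms, a hedged sketch of this split, with hypothetical
names: the serial test harness runs each test as
`$(TESTS_ENVIRONMENT) test', so a driver/case split can look like

TESTS = cases/basic.test cases/corner.test
TESTS_ENVIRONMENT = ./testdriver
EXTRA_DIST = $(TESTS)

where testdriver is the compiled driver and each .test file is a
plain-text description of one case.)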

If each test consists of code, then yes, it takes time to build and
effort to maintain.  If each test case consists of a script that
talks to a test driver, building is very fast; maintenance can be a
breeze or hell, depending on the design.  A word of warning: good
planning is the key there.

In general, I prefer "small" test binaries, not linking to
everything.a, so that once the test gives the correct result, the
test driver can be leveraged to do:
- performance profiling (gprof),
- code coverage (tcov),
- or dynamic analysis (purify-like tools).
The idea is that since someone made the effort to write it (it's
automated in the makefile, right?), it might as well be used to its
full value.

tcov is very important, since a good test suite will try to improve
coverage, not beat the same function to death while ignoring all the
others, and it's very hard to assess coverage without a tool (take a
wild guess, divide by 10, that's the real coverage).


That's all for now; sorry if it's a bit off-topic for the list.

I hope this helps with ideas.

Cheers,
Marc Alff.