Re: make check failures

2011-05-24 Thread Peter Rosin
Den 2011-05-24 07:43 skrev Graham Reitz:
> After a successful build, make check yields:
> 
> =
> 628 of 650 tests failed
> (79 tests were not run)
> See tests/test-suite.log
> Please report to bug-autom...@gnu.org
> =
> make[3]: *** [test-suite.log] Error 1
> make[2]: *** [check-TESTS] Error 2
> make[1]: *** [check-am] Error 2
> make: *** [check-recursive] Error 1
> 
> 
> Configure:
> ./../../gnu/automake/automake-1.11/configure --prefix=$HOME/root/usr/local 
> AUTOCONF=$HOMEt/root/usr/local/bin/autoconf 
> AUTOM4TE=$HOMEt/root/usr/local/bin/autom4te

*snip*

Hi Graham,

Did you copy-paste that line?
Or, to get to the point, are the extra "t"s in "...=$HOMEt/root..." a typo,
or are they the reason for the failures?

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-02 Thread Peter Rosin
Den 2011-09-02 23:11 skrev Bruno Haible:
> Ralf Wildenhues wrote in :
>>>> Windows+MSVC. I know this is not a gnulib target.
>>>
>>> Yes. But it could become a gnulib target if the $CC wrapper script was
>>> agreed upon in GNU. For example, if Automake would distribute it, like
>>> it distributes a couple of other wrapper scripts.
>>
>> There is a branch in the Automake git repo which has that.
>> Unfortunately, my lack of time has contributed to it not being actively
>> merged and maintained.
> 
> This branch contains an 'ar' emulation (file 'ar-lib'). Good.
> But how is the file 'compile' meant to be used? CC="compile cl"
> or how? In other words, if I set CC="cl", how are autoconf tests
> that run the compiler meant to be performed?
> 
> People routinely use a program called 'cccl' in
>   $ ./configure CC=cccl
> See [1].
> 
> Will that functionality be moved from 'compile' to a script like
> 'cccl' at some point?

The web has a couple of different forked versions of cccl. What is
the "upstream" for that script? There is ancient support for some
version of cccl in libtool, but I didn't get it to work with any
version of cccl that I found (some things might have worked, but
there were loads of failures for simple things in the libtool
testsuites. I have forgotten the details so don't ask...). So, I
didn't want to create yet another fork of cccl, and instead fixed
the 'compile' script in Automake to handle the bits that must be
handled by the build tools (and taught libtool to deal with cl
natively). What I didn't do was add all sorts of options to
'compile' to make cl look like gcc, because that way lies madness.
Packages aiming for portability shouldn't assume gcc in the first
place. End result, you have to feed optimization options and such
in the way cl expects them.

There is also the 'ar-lib' script that does the bare minimum to
present a posixy archiver interface (albeit incomplete) to Microsoft
lib, in a similar manner to how 'compile' makes cl understand the
-l and -L options (and a few others).
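
To illustrate (a sketch from memory, the exact lib options may differ),
a posixy archive creation request is translated roughly like so:

  $ ar-lib lib cr libfoo.a foo.obj bar.obj
  # runs, approximately:
  #   lib -OUT:libfoo.a foo.obj bar.obj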

All in all, I regularly use these scripts to build packages with
cl, and the configure.ac/Makefile.am of those packages need not
do special things to have it all work.

A typical configure invocation might be (from an MSYS shell):

.../configure CC="cl -nologo" CFLAGS=-MD LD=link NM="dumpbin -symbols" \
  AR="/home/peda/automake/lib/ar-lib lib" STRIP=: RANLIB=: \
  --enable-dependency-tracking

configure.ac needs to have AM_PROG_CC_C_O which might not be one of
the most common macros, but that's certainly not too bad. And when
the mentioned branch in Automake git is merged I imagine there will be
a macro named something like AM_PROG_AR that will make it sufficient
to say AR=lib to configure, instead of pointing at 'ar-lib' explicitly.
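
If such a macro materializes, the invocation above could presumably
shrink to something like this (speculative until the branch is merged):

  .../configure CC="cl -nologo" CFLAGS=-MD AR=lib ...

with the package itself supplying and pointing at 'ar-lib'.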

I don't think cccl is the future, I see it as the past. It's
simply not needed when the needed bits are already in 'compile'.

It's a bit sad to see all the effort going into writing private
scripts wrapping cl into something that looks like gcc, when the
effort could be spent making autotools just work instead.

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-03 Thread Peter Rosin
Den 2011-09-03 00:43 skrev Michael Goffioul:
> On Fri, Sep 2, 2011 at 11:26 PM, Peter Rosin  wrote:
>> The web has a couple of different forked versions of cccl. What is
>> the "upstream" for that script? There is ancient support for some
>> version of cccl in libtool, but I didn't get it to work with any
>> version of cccl that I found (some things might have worked, but
>> there were loads of failures for simple things in the libtool
>> testsuites. I have forgotten the details so don't ask...). So, I
>> didn't want to create yet another fork of cccl, and instead fixed
>> the 'compile' script in Automake to handle the bits that must be
>> handled by the build tools (and taught libtool to deal with cl
>> natively). What I didn't do was add all sorts of options to
>> 'compile' to make cl look like gcc, because that way lies madness.
>> Packages aiming for portability shouldn't assume gcc in the first
>> place. End result, you have to feed optimization options and such
>> in the way cl expects them.
>>
>> There is also the 'ar-lib' script that does the bare minimum to
>> present a posixy archiver interface (albeit incomplete) to Microsoft
>> lib, in a similar manner to how 'compile' makes cl understand the
>> -l and -L options (and a few others).
>>
>> All in all, I regularly use these scripts to build packages with
>> cl, and the configure.ac/Makefile.am of those packages need not
>> do special things to have it all work.
>>
>> A typical configure invocation might be (from an MSYS shell):
>>
>> .../configure CC="cl -nologo" CFLAGS=-MD LD=link NM="dumpbin -symbols" \
>>  AR="/home/peda/automake/lib/ar-lib lib" STRIP=: RANLIB=: \
>>  --enable-dependency-tracking
>>
>> configure.ac needs to have AM_PROG_CC_C_O which might not be one of
>> the most common macros, but that's certainly not too bad. And when
>> the mentioned branch in Automake git is merged I imagine there will be
>> a macro named something like AM_PROG_AR that will make it sufficient
>> to say AR=lib to configure, instead of pointing at 'ar-lib' explicitly.
>>
>> I don't think cccl is the future, I see it as the past. It's
>> simply not needed when the needed bits are already in 'compile'.
>>
>> It's a bit sad to see all the effort going into writing private
>> scripts wrapping cl into something that looks like gcc, when the
>> effort could be spent making autotools just work instead.
> 
> I'd be happy to give those scripts a try when I get some time. I guess
> I have to download/install autoconf/automake/libtool from git. Anything else?

Libtool 2.4 should do, and (I think) you only need the 'compile' and
'ar-lib' scripts from automake-git-master to try them.

> When you want to compile a package, I guess you then have to rerun
> autoconf/automake to get updated compilation scripts?

That's one way, but you can also just point to a new enough compile/ar-lib.

> When configure.ac does not contain the AM_PROG_CC_C_O macro,
> what do you do? Add it manually? For instance I checked PCRE code
> (http://vcs.pcre.org/viewvc/code/trunk/configure.ac?revision=666&view=markup)
> and couldn't find that macro.

In that case, as stated above, you can just use compile/ar-lib as you'd
use cccl; the macros only trigger the use of the scripts when they are
needed (and the inclusion of the scripts in the package). If you know
that you need them it's not wrong to point to them from the start (as I
did for AR in the example configure invocation above).

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-03 Thread Peter Rosin
Den 2011-09-03 03:47 skrev Bruno Haible:
> Peter Rosin wrote:
>> I didn't want to create yet another fork of cccl, and instead fixed
>> the 'compile' script in Automake to handle the bits that must be
>> handled by the build tools (and taught libtool to deal with cl
>> natively). What I didn't do was add all sorts of options to
>> 'compile' to make cl look like gcc, because that way lies madness.
>> ...
>> 'compile' makes cl understand the
>> -l and -L options (and a few others).
> 
> So, if I understand it right, you *don't* want to assume that $CC
> understands -l and -L options, like the C compiler in POSIX does for
> ages (cf.
> <http://pubs.opengroup.org/onlinepubs/9699919799/utilities/c99.html>).

I think you misunderstand. AM_PROG_CC_C_O clobbers $CC and prepends
/path/to/package/build-aux/compile if $CC does not support
-c and -o at the same time, which cl does not (the functionality
as such is there, but you say -Fo instead of -o). If
AM_PROG_CC_C_O is not used in configure.ac (and hence there is no
'compile' script inside the package) you can say
CC="/path/to/other/compile cl" explicitly instead.
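
A minimal sketch of what that clobbering looks like, with made-up paths
for illustration:

  $ ./configure CC=cl
  $ grep '^CC =' Makefile
  CC = /path/to/package/build-aux/compile cl

so the wrapper ends up in $CC automatically, and the configure tests
run through it as well.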

> And since Autoconf scripts invoke $CC directly and not 'build-aux/compile',
> all Autoconf macros that use -l or -L need to be adapted, so that they
> handle 'cl' directly.

Since $CC is updated by AM_PROG_CC_C_O your assumption does not hold.

> Can you please send patches to support 'cl' for these uses in gnulib?

No need for that, as I think all of the below is handled for cl by a
current 'compile' script.

> getaddrinfo.m4:46:  LIBS="$LIBS -lws2_32"
> getaddrinfo.m4:56:  GETADDRINFO_LIB="-lws2_32"
> gethostname.m4:24:   LIBS="$LIBS -lws2_32"
> gethostname.m4:34:  GETHOSTNAME_LIB="-lws2_32"
> gettext.m4:268:LIBS=`echo " $LIBS " | sed -e 's/ -lintl / /' -e 's/^ //' -e 's/ $//'`
> hostent.m4:30: LIBS="$LIBS -lws2_32"
> hostent.m4:44:HOSTENT_LIB="-lws2_32"
> lib-link.m4:85:  *" -l"*) LIBS="$LIBS $LIB[]NAME" ;;
> lib-link.m4:322:-L*)
> lib-link.m4:373:LTLIB[]NAME="${LTLIB[]NAME}${LTLIB[]NAME:+ }-L$found_dir -l$name"
> lib-link.m4:460:LIB[]NAME="${LIB[]NAME}${LIB[]NAME:+ }-L$found_dir -l$name"
> lib-link.m4:529:  -L*)
> lib-link.m4:610:  -l*)
> lib-link.m4:730:-L) next=yes ;;
> lib-link.m4:731:-L*) dir=`echo "X$opt" | sed -e 's,^X-L,,'`
> servent.m4:32: LIBS="$LIBS -lws2_32"
> socketlib.m4:20:  LIBS="$LIBS -lws2_32"
> socketlib.m4:34:  LIBSOCKET='-lws2_32'

'compile' handles -o, -l, -L, -Xlinker and -Wl, plus posix->win32 path
conversion. Maybe I have forgotten something but I think that's it.
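
To make that concrete, a link line fed through 'compile' is rewritten
roughly as follows (the exact library search is from memory, so treat
this as a sketch rather than gospel):

  $ compile cl -nologo -o foo.exe foo.obj -L/opt/bar/lib -lbar
  # becomes, approximately:
  #   cl -nologo foo.obj -Fefoo.exe -link -LIBPATH:c:/opt/bar/lib bar.lib

with the posix paths converted to their win32 equivalents on the way.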

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-03 Thread Peter Rosin
Hi Stefano,

Den 2011-09-03 09:41 skrev Stefano Lattarini:
> On Saturday 03 September 2011, Peter Rosin wrote:
>>
>> [BIG SNIP]
>>
>> I don't think cccl is the future, I see it as the past. It's
>> simply not needed when the needed bits are already in 'compile'.
>>
>> It's a bit sad to see all the effort going into writing private
>> scripts wrapping cl into something that looks like gcc, when the
>> effort could be spent making autotools just work instead.
>>
> For what concerns this: are you willing to re-submit your patch
> series about AM_PROG_AR to automake-patches? I will try hard to
> look into it, if you are willing to do the required testing and
> to patiently explain to me the details I won't understand (and
> be warned that there will probably be many of them, since I'm a
> total Windows noob).
> 
> Oh, also, before doing that, could you please merge the 'maint'
> branch into the 'msvc' branch? Or I can do that for you if you
> prefer (but then you'll have to double-check that the merge has
> been really successful).

There was that little disagreement over how win32 portability
warnings should be handled. That should perhaps be resolved first?

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-05 Thread Peter Rosin
Den 2011-09-03 09:41 skrev Stefano Lattarini:
> Oh, also, before doing that, could you please merge the 'maint'
> branch into the 'msvc' branch? Or I can do that for you if you
> prefer (but then you'll have to double-check that the merge has
> been really successful).

I have now merged maint into msvc, with no new FAILs in the
testsuite related to the merge.

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-09 Thread Peter Rosin
Den 2011-09-09 19:00 skrev Bruno Haible:
> Peter Rosin wrote:
>>> When configure.ac does not contain the AM_PROG_CC_C_O macro,
>>> what do you do? Add it manually? ...
>>
>> In that case, as stated above, you can just use compile/ar-lib as you'd
>> use cccl; the macros only trigger the use of the scripts when they are
>> needed (and the inclusion of the scripts in the package). If you know
>> that you need them it's not wrong to point to them from the start (as I
>> did for AR in the example configure invocation above).
> 
> Thanks for this advice. I am starting to build gnulib testdirs with
> 
> $ ./configure --host=i586-pc-winnt --prefix=/usr/local/msvc \
>   CC="$HOME/msvc/compile cl -nologo" \
>   CFLAGS="-MD" \
>   CPPFLAGS="-I/usr/local/msvc/include" \
>   LDFLAGS="-L/usr/local/msvc/lib" \
>   LD="link" \
>   NM="dumpbin -symbols" \
>   STRIP=":" \
>   AR="$HOME/msvc/ar-lib lib" \
>   RANLIB=":" \
> 
> and at least configure runs fine and gives reasonable results. (Here
> $HOME/msvc/compile and $HOME/msvc/ar-lib are taken from the tip of the
> msvc branch in Automake.)
> 
>> Libtool 2.4 should do
> 
> How is that possible? Libtool contains special code for the platforms
>   cygwin*
>   mingw*
>   cegcc*
>   pw32*
>   interix*
> but none for
>   winnt*
> nor for
>   windows*
> 
> I think the $host_os value winnt should be treated similarly to mingw32
> or pw32, no?

The platform name was discussed a few years back on the libtool lists (I
think somewhere in the gigantic thread "[patch #6448] [MSVC 7/7] Add MSVC
Support" from August 2008 approximately), the outcome was that compiling
with cl for the MS C runtimes uses the same triplet as compiling with
gcc for the MS C runtimes. I.e. *-*-mingw32.

So, just drop the --host argument (if you run in an MSYS shell, which I
assume?).

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-09 Thread Peter Rosin
Den 2011-09-09 19:27 skrev Bruno Haible:
> But since not all packages use the AM_PROG_CC_C_O macro (only the use of
> source files in directories without a Makefile.in requires it), I would
> better recommend to everyone to use CC="/path/to/compile cl -nologo"
> from the beginning.

But don't forget to send patches upstream so that more projects work out
of the box. :-)

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-10 Thread Peter Rosin
Den 2011-09-10 02:22 skrev Bruno Haible:
> Peter Rosin wrote:
>> The platform name was discussed a few years back on the libtool lists (I
>> think somewhere in the gigantic thread "[patch #6448] [MSVC 7/7] Add MSVC
>> Support" from August 2008 approximately) [0], the outcome was that compiling
>> with cl for the MS C runtimes uses the same triplet as compiling with
>> gcc for the MS C runtimes. I.e. *-*-mingw32.
> 
> Actually, the set of triplets is not defined by libtool, but by config.sub.
> In config.sub the *-*-mingw32 appears to be in use for this platform already
> since May 2005 (see [1][2][3][4]).

From time to time, I'm wondering if reusing *-*-mingw* for cl is the
right decision. The main benefits that I see are that you don't have to
cross compile if you are in MSYS, and that the special casing needed for
MinGW is generally needed for MSVC as well, so you don't have to sprinkle
extra *-*-winnt* variations over a bunch of case statements.

> Fine with me. But in gnulib, we will have to make the distinction between
> mingw and msvc, because on mingw, the library libmingwex is part of the
> default runtime libraries, and it defines lots of symbols, from 'acosf' to
> 'wtoll'.

I'm ignorant of the details, so there's probably some good reason why that
isn't covered by func-by-func feature tests using the normal Autoconf paradigm?
I'm sure libmingwex is a lib that evolves, so gnulib presumably has to deal
with different levels of help from it. No help at all is just an extreme
of that (and some gnulib modules apparently need to be updated to support
that extreme).

>> So, just drop the --host argument (if you run in an MSYS shell, which I
>> assume?).
> 
> No, MSYS is too unreliable, I can't recommend that. I use Cygwin as a build
> environment.

But then you are in the 'fake' cross territory (according to the terminology
of the libtool manual), which is also not always desirable. E.g., if you use
absolute path names (I'm sure you try to avoid them) and Libtool, you need
to tell Libtool that you are faking the cross compile with an extra configure
argument:

lt_cv_to_tool_file_cmd=func_convert_file_cygwin_to_w32

See http://www.gnu.org/s/libtool/manual/libtool.html#Cygwin-to-MinGW-Cross

Other parts of the build system might not provide similar support, but if
you are in MSYS, MSYS is doing all the faking for you, so there's no need
to tell Libtool or anybody else.

(and for the record, my MSYS install seems very reliable to me)

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-09-11 Thread Peter Rosin
Den 2011-09-11 00:04 skrev Bruno Haible:
> Peter Rosin wrote:
>> The main benefits that I see are that you don't have to
>> cross compile if you are in MSYS
> 
> You have the wrong notion of "cross compile", if you think cross-compiling
> means that $host != $build. When I am building for i386-pc-linux-gnu from
> a x86_64-pc-linux-gnu machine, I am *not* cross-compiling. When you are
> building for i586-pc-mingw32 from i586-pc-cygwin, you are *not* cross-
> compiling. Cross-compiling means that the generated executable can not be
> run on the build machine.

I don't think I have the wrong notion, but your statement made me unsure,
so I decided to read the fine Autoconf manual. I ended up reading the
following section back and forth a couple of times...

http://www.gnu.org/software/autoconf/manual/autoconf.html#Hosts-and-Cross-Compilation

The way I read it, I can see how you arrive at your view of things *if*
you live in the Autoconf 2.13 era (10 years ago), but even then I don't
think your view was ever the intended definition of a cross compile (but
I don't know).

How can a cross not be a cross anymore just because someone has installed
an emulator? That seems entirely bogus to me.

If you use Autoconf in the way the Autoconf manual advises, you should pass
both --host and --build to configure (if you specify any of them) and
Autoconf enters cross-compile mode if they are different. End of story?
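
Concretely, for this setup that would mean something like (illustrative
triplets):

  $ ./configure --build=i686-pc-cygwin --host=i686-pc-mingw32 ...

and configure then treats it as a cross compile precisely because the
two differ.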

> It can be necessary to use cygpath also when not cross-compiling. Gladly,
> the Automake emitted build uses it, and the 'compile' script from the
> Automake 'msvc' branch uses it as well.

Huh? I'm sure I disagree because we have different views of what a cross is.
In what event is cygpath needed to build a package for Cygwin using Cygwin
tools in a Cygwin environment? Maybe for some package trying to outsmart
Cygwin or otherwise access Win32 directly, but that's not your average
package...

>>> I use Cygwin as a build environment.
>>
>> But then you are in the 'fake' cross territory (according to the terminology
>> of the libtool manual), which is also not always desirable. E.g., if you use
>> absolute path names (I'm sure you try to avoid them) and Libtool, you need
>> to tell Libtool that you are faking the cross compile with an extra configure
>> argument:
>>
>>  lt_cv_to_tool_file_cmd=func_convert_file_cygwin_to_w32
>>
>> See http://www.gnu.org/s/libtool/manual/libtool.html#Cygwin-to-MinGW-Cross
> 
> I am not faking a cross-compile; I am not cross-compiling at all.
> Libtool needs to be fixed to use cygpath when needed, without requiring
> an extra configure argument. That configure argument is implicit from
> $host and $build.

According to the Libtool manual, you are "faking" it. Live with it :-)
And why should it be the duty of Libtool to tidy up after your "obsolete",
"do not rely on the following" (according to the Autoconf manual) use of
configure?

>> MSYS is doing all the faking for you
> 
> It does it in a buggy way. I pass the argument "/dev/null" to a program, and
> the MSYS execve call silently converts it to "nul" (or vice versa, I don't
> remember). That's not reliable, because not all program arguments are meant
> to be file names. To do it correctly, file name conversions need to be
> explicit. That's what cygpath is for.

Well, nul (or NUL: or whatever the exact translation is) is to Win32 what
/dev/null is to Posix, so that example is by design. You don't like that
design decision, but in my book it's a bit harsh to call that particular
feature a bug.

The whole point of MSYS is to provide automatic path translation, it's
the defining difference between Cygwin and MSYS. The fact that you can't
do it perfectly is why the Cygwin project rejects the temptation. But for
the majority of cases where it works it is very handy. When it doesn't
work it is of course obnoxious, but that is just the way it is. You can't
eat the cake and still have it.
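
For reference, the explicit conversion looks like this under Cygwin
(output paths will of course vary per install):

  $ cygpath -w /home/peda/foo.c
  C:\cygwin\home\peda\foo.c
  $ cygpath -u 'C:\Windows'
  /cygdrive/c/Windows

whereas MSYS guesses which arguments are paths and converts them behind
the scenes.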

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-10-19 Thread Peter Rosin
Stefano Lattarini skrev 2011-09-03 09:41:
> For what concerns this: are you willing to re-submit your patch
> series about AM_PROG_AR to automake-patches? I will try hard to
> look into it, if you are willing to do the required testing and
> to patiently explain to me the details I won't understand (and
> be warned that there will probably be many of them, since I'm a
> total Windows noob).

It is not a patch series, it is a single patch that adds a new
macro that is modeled after AM_PROG_CC_C_O, some tests to catch
regressions and a plethora of trivial updates to the testsuite.
Ah, and the little portability warning of course, triggered when
building libraries w/o AM_PROG_AR in configure...

Anyway, I have rebased the patch on top of the current msvc branch
and have added fixes for fallout in a few new tests etc.

The testsuite is sailing along; I'll post the updated patch as soon as
it finishes satisfactorily. I just wanted to post this in case it
improves the odds of making the release...

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-10-19 Thread Peter Rosin
Stefano Lattarini skrev 2011-10-19 15:59:
> On Wednesday 19 October 2011, Peter Rosin wrote:
>> Stefano Lattarini skrev 2011-09-03 09:41:
>>> For what concerns this: are you willing to re-submit your patch
>>> series about AM_PROG_AR to automake-patches? I will try hard to
>>> look into it, if you are willing to do the required testing and
>>> to patiently explain to me the details I won't understand (and
>>> be warned that there will probably be many of them, since I'm a
>>> total Windows noob).
>>
>> It is not a patch series, it is a single patch that adds a new
>> macro that is modeled after AM_PROG_CC_C_O, some tests to catch
>> regressions and a plethora of trivial updates to the testsuite.
>>
> But then we should also add a new `windows' (or better `msvc'?) warning
> category, so that we won't force users not interested in MSVC portability
> to choose between a mandated use of the new macro (which would probably
> be perceived as gratuitous bloating) and the forsaking of all the
> portability warnings (which is bad, bad, bad).  I don't care whether
> this new warning category is introduced by a preparatory patch or by a
> follow-up one, as long as it's in place before a merge to `maint' takes
> place.

I'm not too fond of any of these names. What if some other non-POSIX
archiver materializes? And it seems philosophically wrong to add something
as visible as a warning category named after some random third-party
company or non-free tool.

Perhaps -Wno-portability-extra, -Wno-extra-portability or
-Wno-extreme-portability?


Hmmm, I think my favorite so far is -Wextra-portability, and I think
I would like it to work like this:

-Wall -> *all* warnings.
-Wportability -> portability but not extra-portability
-Wextra-portability -> portability *and* extra-portability
-Wall -Wno-extra-portability -> Everything but extra-portability.
-Wall -Wno-portability -> Neither portability nor extra-portability.

So, the special cases are that turning on extra-portability also
turns on portability, and turning off portability also turns off
extra-portability. Is that too complicated? Should it simply be
two orthogonal categories instead?

Which, if any, of --gnits, --gnu and --foreign should turn on
extra-portability?
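
In Makefile.am or on the command line that would look something like
this, assuming the category lands under the proposed name:

  AUTOMAKE_OPTIONS = -Wall -Wno-extra-portability

or, equivalently:

  $ automake -Wall -Wno-extra-portability

i.e. everything except the new category.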

>> Ah, and the little portability warning of course, triggered when
>> building libraries w/o AM_PROG_AR in configure...
>>
> Yep, see above.  And today I agree with you that this warning should be
> enabled by `-Wall'.

Good, let's keep it that way :-)

>> Anyway, I have rebased the patch on top of the current msvc branch
>> and have added fixes for fallout in a few new tests etc.
>>
>> The testsuite is sailing along; I'll post the updated patch as soon
>> as it finishes satisfactorily. I just wanted to post this in case it
>> improves the odds of making the release...
>>
> I'd give at least three weeks before the 1.11.2 beta(s), so there's no
> need to hurry excessively.  But thanks for the heads-up.

A few testsuite runs and three weeks is gone in a hurry...

Cheers,
Peter



Re: autoconf + automake support for MSVC

2011-10-19 Thread Peter Rosin
Peter Rosin skrev 2011-10-19 18:03:
> Stefano Lattarini skrev 2011-10-19 15:59:
>> On Wednesday 19 October 2011, Peter Rosin wrote:
>>> Stefano Lattarini skrev 2011-09-03 09:41:
>>>> For what concerns this: are you willing to re-submit your patch
>>>> series about AM_PROG_AR to automake-patches? I will try hard to
>>>> look into it, if you are willing to do the required testing and
>>>> to patiently explain to me the details I won't understand (and
>>>> be warned that there will probably be many of them, since I'm a
>>>> total Windows noob).
>>>
>>> It is not a patch series, it is a single patch that adds a new
>>> macro that is modeled after AM_PROG_CC_C_O, some tests to catch
>>> regressions and a plethora of trivial updates to the testsuite.
>>>
>> But then we should also add a new `windows' (or better `msvc'?) warning
>> category, so that we won't force users not interested in MSVC portability
>> to choose between a mandated use of the new macro (which would probably
>> be perceived as gratuitous bloating) and the forsaking of all the
>> portability warnings (which is bad, bad, bad).  I don't care whether
>> this new warning category is introduced by a preparatory patch or by a
>> follow-up one, as long as it's in place before a merge to `maint' takes
>> place.
> 
> I'm not too fond of any of these names. What if some other non-POSIX
> archiver materializes? And it seems philosophically wrong to add something
> as visible as a warning category named after some random third-party
> company or non-free tool.
> 
> Perhaps -Wno-portability-extra, -Wno-extra-portability or
> -Wno-extreme-portability?
> 
> 
> Hmmm, I think my favorite so far is -Wextra-portability, and I think
> I would like it to work like this:
> 
> -Wall -> *all* warnings.
> -Wportability -> portability but not extra-portability
> -Wextra-portability -> portability *and* extra-portability
> -Wall -Wno-extra-portability -> Everything but extra-portability.
> -Wall -Wno-portability -> Neither portability nor extra-portability.
> 
> So, the special cases are that turning on extra-portability also
> turns on portability, and turning off portability also turns off
> extra-portability. Is that too complicated? Should it simply be
> two orthogonal categories instead?
> 
> Which, if any, of --gnits, --gnu and --foreign should turn on
> extra-portability?

Here we go. I added a second patch with the new warning category. I'm
sending the series as replies to this message but will move to
automake-patches instead.

Cheers,
Peter



Manual merges.

2011-10-21 Thread Peter Rosin
Hi!

I checked to see what would happen if I merged maint back into msvc after
committing the AM_PROG_AR series, and there is some minor testsuite fallout
that needs to be fixed manually.  My plan was to amend the merge commit
with the fixups, along with a note in ChangeLog pointing to what files
I'm touching etc.  The question is how I should label the resulting commit.

The merges normally get commit messages like

> Merge branch 'maint' into msvc

but that does not look like a normal ChangeLog header. But that's the best
I can think of.

So, a commit message along these lines:

> Merge branch 'maint' into msvc
> 
> * tests/foo.test: Adjust to new portability requirements due
> to the new AM_PROG_AR macro.
> * tests/bar.test: Likewise.

And this in ChangeLog:

> 4711-17-42  Peter Rosin  
> 
>   Merge branch 'maint' into msvc
>   * tests/foo.test: Adjust to new portability requirements due
>   to the new AM_PROG_AR macro.
>   * tests/bar.test: Likewise.

Sounds like a plan?

Cheers,
Peter



Re: Manual merges.

2011-10-21 Thread Peter Rosin
Hi Stefano,

Stefano Lattarini skrev 2011-10-21 10:17:
> On Friday 21 October 2011, Peter Rosin wrote:
>> Hi!
>>
> Hi Peter.
> 
>> I checked to see what would happen if I merged maint back into msvc after
>> committing the AM_PROG_AR series, and there is some minor testsuite fallout
>> that needs to be fixed manually.  My plan was to amend the merge commit
>> with the fixups, along with a note in ChangeLog pointing to what files
>> I'm touching etc.  The question is how I should label the resulting commit.
>>
>> The merges normally get commit messages like
>>
>>> Merge branch 'maint' into msvc
>>
>> but that does not look like a normal ChangeLog header. But that's the best
>> I can think of.
>>
>> So, a commit message along these lines:
>>
>>> Merge branch 'maint' into msvc
>>>
>>> * tests/foo.test: Adjust to new portability requirements due
>>> to the new AM_PROG_AR macro.
>>> * tests/bar.test: Likewise.
>>
>> And this in ChangeLog:
>>
>>> 4711-17-42  Peter Rosin  
>>>
>>> Merge branch 'maint' into msvc
>>> * tests/foo.test: Adjust to new portability requirements due
>>> to the new AM_PROG_AR macro.
>>> * tests/bar.test: Likewise.
>>
>> Sounds like a plan?
>>
> Yes.  But I can also see two other possibilities:
> 
>  - Fix the failing tests in a follow-up patch; this way, you can write
>a "usual" commit message and a "usual" ChangeLog entry.

But then you get failing tests for one commit. Could be a nuisance for
someone later doing a bisect.  Other projects "forbid" that, and I'm
quite understanding of that position.

>  - Amend the merge commit, but instead of adding a new ChangeLog entry,
>edit the one(s) from the commits you are merging (while keeping the
>commit message of the merge as in your example).

Yes, in this case however, that edit in the ChangeLog would be a nop
since the original entry has a blanket "All relevant tests: ..."
covering the needed changes. So, the change would be "silent" according
to the ChangeLog.  At least if the ChangeLog is badly ordered, which it
tends to be, and resorting to sorting (argh, no pun intended) it manually
seems like a massive tool failure.

If you actually do have a ChangeLog entry that can be edited to also list
the new fixup, it would look very strange if that entry ends up before
the entry which added a new file that needed fixup in the merge. Especially
if the new file was added in a change newer than the change causing the
new file to need fixup in the merge. Unless you doctor the date on the
ChangeLog entry causing the fixup to be needed, but doing that is just
plain evil.

Should I perhaps file a bug that the ChangeLog file should be generated?

> I'm not sure which of these three options (the one proposed by you
> and the one proposed by me) would be preferable, and I must say I have
> no real preference either.  So you choose (unless Someone Else wants
> to chime in and override my advice ;-)

I think my suggestion is the only one without drawbacks for the general
case when manual merges are required.

Cheers,
Peter



Re: [RFC] Releasing automake 1.11.2

2011-10-30 Thread Peter Rosin
Stefano Lattarini skrev 2011-10-16 17:44:
> Hello automakers.
> 
> I think it's about time to release automake 1.11.2 -- the `maint'
> branch contains various bug fixes w.r.t. the 1.11.1 release (some
> of them quite important), and offers some new small features and
> various warnings/deprecations that should prepare the users for the
> backward-incompatible changes planned for automake 1.12 (so, the
> more 1.11.2 precedes 1.12, the more these warnings will have a
> chance to be effective).

Just a heads up, but master has a few changes to the depcomp and
compile scripts coming from the msvc branch that really
should be merged into the branch-1.11 branch (via maint I suppose)
before the release of 1.11.2. The changes were merged into master
with f74062b3 (merging da15b997) in case it is not desired the
merge the current state of the msvc branch.

Cheers,
Peter



Re: [RFC] Releasing automake 1.11.2

2011-10-30 Thread Peter Rosin
Peter Rosin skrev 2011-10-30 18:25:
> Stefano Lattarini skrev 2011-10-16 17:44:
>> Hello automakers.
>>
>> I think it's about time to release automake 1.11.2 -- the `maint'
>> branch contains various bug fixes w.r.t. the 1.11.1 release (some
>> of them quite important), and offers some new small features and
>> various warnings/deprecations that should prepare the users for the
>> backward-incompatible changes planned for automake 1.12 (so, the
>> more 1.11.2 precedes 1.12, the more these warnings will have a
>> chance to be effective).
> 
> Just a heads up, but master has a few changes to the depcomp and
> compile scripts coming from the msvc branch that really
> should be merged into the branch-1.11 branch (via maint I suppose)
> before the release of 1.11.2. The changes were merged into master
> with f74062b3 (merging da15b997) in case it is not desired the

s/desired the/desired to/

> merge the current state of the msvc branch.

Sorry for the confusion, but the latest commit from the msvc branch
currently merged into master is 38846c5f, which was apparently merged
via the tests-init branch. That was right before the recent round of
AM_PROG_AR commits. And 38846c5f is just a few "obvious" changes
after a change to the compile script, so it is a sensible point to
merge into maint should it not be desirable to merge msvc wholesale.
(However, from my POV, I think it is indeed desirable to just merge
msvc into maint/branch-1.11 before the release. Of course.)

Cheers,
Peter



Re: Could automake-generated Makefiles required GNU make?

2011-11-24 Thread Peter Rosin
Stefano Lattarini skrev 2011-11-21 21:56:
>> Stefano Lattarini wrote:
>>>because GNU make is very
>>> portable and easy to build and install (and free from bootstrapping
>>> problems AFAIK), and because the incompatibilities between different
>>> make versions are so appalling.

There is one possibly hard bootstrapping problem. What if you want to
deploy some package that does not need a C compiler on some system that
lacks both a C compiler and GNU Make? You would have problems there for
sure. Some number-crunching fortran-centric piece comes to mind, or some
locked down financial system where a cobol compiler or something is
present, but no C compiler.

But it's down in the bedrock below the basement if you're going by the
number of affected systems of course...

So, my point is that the requirement is not "GNU Make and build it if
missing", it's "GNU Make /or/ GNU Make sources plus a C compiler".

(plus whatever else is needed to build GNU Make, a shell I suppose)

Cheers,
Peter



make dist-lzma and make dist-xz

2011-12-12 Thread Peter Rosin
Hi!

I noticed that the changes to "make dist-xz" to default to -e fixed
the xz.test on MinGW, but that lzma.test still fails (lzma: (stdin): Not
enough memory). Hoping to fix the last fail in the testsuite, I looked
into adding one something like LZMA_OPT or something to "make dist-lzma".
But in my cursory googling, I could not my grips around what variable
to use. Does anyone know?

However, what I did find was that there is a variable named XZ_DEFAULT.
Shouldn't that be considered before "forcing" -e when XZ_OPT is missing?

Cheers,
Peter



Re: make dist-lzma and make dist-xz

2011-12-12 Thread Peter Rosin
Peter Rosin skrev 2011-12-12 10:14:
> Hi!
> 
> I noticed that the changes to "make dist-xz" to default to -e fixed
> the xz.test on MinGW, but that lzma.test still fails (lzma: (stdin): Not
> enough memory). Hoping to fix the last fail in the testsuite, I looked
> into adding one something like LZMA_OPT or something to "make dist-lzma".
s/ one//
> But in my cursory googling, I could not my grips around what variable
s/my grips/get &/
> to use. Does anyone know?
> 
> However, what I did find was that there is a variable named XZ_DEFAULT.
s/DEFAULT/&S/
> Shouldn't that be considered before "forcing" -e when XZ_OPT is missing?

Sorry for all the typos, the most important one is that the variable is
named XZ_DEFAULTS.

Cheers,
Peter



Re: make dist-lzma and make dist-xz

2011-12-14 Thread Peter Rosin
Stefano Lattarini skrev 2011-12-13 18:17:
> On Monday 12 December 2011, Peter Rosin wrote:
>> Hi!
>>
> Hi Peter.
> 
>> I noticed that the changes to "make dist-xz" to default to -e fixed
>> the xz.test on MinGW, but that lzma.test still fails (lzma: (stdin): Not
>> enough memory).
>>
> [BTW: thanks for your continuous testing with Cygwin and MinGW!]
> 
>> Hoping to fix the last fail in the testsuite, I looked
>> into adding something like LZMA_OPT or something to "make dist-lzma".
>> But in my cursory googling, I could not get my grips around what variable
>> to use. Does anyone know?
>>
> I don't, but I have another consideration: since lzma seems to be superseded
> by xz [1][2], couldn't we simply deprecate the `dist-lzma' option in this
> 1.11.2 version, and remove it altogether from the next major (or even minor)
> version?  WDYT?

That seems a bit drastic to me.

>> However, what I did find was that there is a variable named XZ_DEFAULT.
>> Shouldn't that be considered before "forcing" -e when XZ_OPT is missing?
>>
> From the xz manpage:
> 
>   XZ_DEFAULTS
> User-specific  or  system-wide default options.  Typically this is
> set in a shell initialization script to enable xz's memory usage
> limiter by default.  Excluding shell initialization scripts and
> similar special cases, scripts must never set or unset XZ_DEFAULTS.
> 
> So XZ_DEFAULTS is mostly meant to allow the user to set default memory
> usage limiters; since we don't touch such a setting in our use of
> XZ_OPT, I say we should be pretty safe.  Moreover:
> 
>   XZ_OPT
> This is for passing options to xz when it is not possible to
> set the options directly on the xz command line ...
> Scripts may use XZ_OPT e.g. to set script-specific default
> compression options.  It is still recommended to allow users
> to override XZ_OPT if that is reasonable ...
> 
> and our use of XZ_OPT seems consistent with the advice given here.

Yes, I guess you are right.

I looked a bit further and it seems newer lzma compressors, probably those
implemented as symlinks to the xz binary, also respond to XZ_DEFAULTS and
XZ_OPT. So, I can make the lzma.test pass with XZ_DEFAULTS=--memlimit=250MiB
in the environment. I wonder how long that workaround will last...

However, older lzma binaries probably don't look at the XZ variables. So,
copying the XZ_OPT code from dist-xz to dist-lzma is probably a no-no.
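
Spelled out, the workaround is simply (with whatever limit suits the
machine):

  $ XZ_DEFAULTS=--memlimit=250MiB make dist-lzma

and it only helps with lzma implementations that are really xz in
disguise, per the above.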

Cheers,
Peter



Re: make dist-lzma and make dist-xz

2011-12-14 Thread Peter Rosin
Stefano Lattarini skrev 2011-12-14 15:39:
> On Wednesday 14 December 2011, Peter Rosin wrote:
>> Stefano Lattarini skrev 2011-12-13 18:17:
>>> On Monday 12 December 2011, Peter Rosin wrote:
>>>>
>>>> Hoping to fix the last fail in the testsuite, I looked
>>>> into adding something like LZMA_OPT or something to "make dist-lzma".
>>>> But in my cursory googling, I could not get my grips around what
>>>> variable to use. Does anyone know?
>>>>
>>> I don't, but I have another consideration: since lzma seems to be
>>> superseded by xz [1][2], couldn't we simply deprecate the `dist-lzma'
>>> option in this 1.11.2 version, and remove it altogether from the
>>> next major (or even minor) version?  WDYT?
>>
>> That seems a bit drastic to me.
>>
> Why?  After all, lzma is deprecated by its own author in favor of xz, and
> using a deprecated format/program to distribute software whose tarballs
> should remain available for various years at least (as is the case with
> most GNU software) seems like a bad idea to me.
> 
> BTW, I like Bob's idea that the `dist-lzma' option could just start
> producing `.xz' tarballs to avoid a too-sudden backward-incompatibility,
> so I'll likely go down that road (after 1.11.2).  Opinions?

Well, lzma produces LZMA streams and xz produces LZMA2 streams. As far as
I'm aware, they are not compatible, and older lzma implementations do
not handle LZMA2 streams. So, switching between them underneath the user is
not very polite. Given that the only problem with dist-lzma seems to be
that the test is failing on some minorish systems, I think it's a bit
drastic to just remove the support this soon.

But I, personally, don't care one whit if dist-lzma goes out the window.
I never used it for anything real and I couldn't care less.

Cheers,
Peter



Merging the msvc branch into maint

2011-12-21 Thread Peter Rosin
Hi!

Since the msvc branch has been merged into both branch-1.11 and master,
it seems natural to also merge it into maint. No?

Currently maint holds an outdated version of e.g. lib/compile.

Cheers,
Peter



Re: Merging the msvc branch into maint

2011-12-22 Thread Peter Rosin
Stefano Lattarini skrev 2011-12-22 09:41:
> On 12/22/2011 08:26 AM, Peter Rosin wrote:
>> Hi!
>>
> Hi Peter.
> 
>> Since the msvc branch has been merged into both branch-1.11 and master,
>> it seems natural to also merge it into maint. No?
>>
> I'd rather not.  First, it wouldn't be useful, since we do 1.11.x maintenance
> releases from branch-1.11 only, we plan to do the next 1.12 release from
> master, and both of these branches already contain the features from msvc.

I'm ok with that.  However, ...

> Second, and more important, the versions of msvc merged into branch-1.11 and
> master are slightly different, in that the one on branch-1.11 doesn't have
> the new `extra-portability' warnings enabled by -Wall (this is required for
> backward compatibility, which a maintenance version should pay particular
> attention to, but is not a behaviour we would want to carry in future
> versions, for reasons you had so eloquently explained in a past discussion).

... I don't believe this to be true.  The (important) differences you
describe are indeed part of branch-1.11, but not msvc. They were added to
the msvc-for-1.11 branch which was then merged into branch-1.11 leaving
the original msvc branch free from this issue.

The only (non-merge) commits in msvc that are not also in master are:
b722b108 "news: fix suboptimal wording"
620ba14f "tests: various minor tweakings, mostly related to AM_PROG_AR"

(unless something has been merged into msvc via maint that has not yet
been merged into master, but that *should* be benign)

Those two commits are already in branch-1.11, and I don't see how merging
msvc into maint is going to cause any trouble. And indeed a (throwaway)
merge of msvc into maint and then maint into master shows only the
inevitable conflicts in NEWS and a trivial-looking conflict in syntax.test.
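
Roughly the following experiment, for anyone wanting to reproduce it
(branch names made up, both discarded afterwards):

  $ git checkout -b tmp-maint maint
  $ git merge msvc          # expect conflicts in NEWS and syntax.test
  $ git checkout -b tmp-master master
  $ git merge tmp-maint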

> So, if we merge msvc into maint as-is, that would create merge conflicts when
> we merge maint back into branch-1.11, and worse, would cause the code from
> maint to have a behaviour more similar to that of the next major version than
> to that of the next maintenance version.  OTOH, we could backport the hacks
> for 1.11.2 into maint, and confuse the already-too-messy automake history
> even more.  Neither of these two possibilities seems particularly appealing
> to me, given that in the end they do not offer any real advantage anyway.

This is a conclusion from your faulty assumption above, I believe,
and continuing the (throwaway) merging, a merge of maint into branch-1.11
after the above (naturally) adds nothing to branch-1.11.

But it was just a suggestion. If you don't want it, then I won't insist.

Cheers,
Peter



Re: Merging the msvc branch into maint

2011-12-22 Thread Peter Rosin
Stefano Lattarini skrev 2011-12-22 11:25:
> On 12/22/2011 10:54 AM, Peter Rosin wrote:
>> Stefano Lattarini skrev 2011-12-22 09:41:
>>> On 12/22/2011 08:26 AM, Peter Rosin wrote:
>>>
>>>> Since the msvc branch has been merged into both branch-1.11 and master,
>>>> it seems natural to also merge it into maint. No?
>>>>
>>> I'd rather not.  First, it wouldn't be useful, since we do 1.11.x
>>> maintenance releases from branch-1.11 only, we plan to do the next 1.12
>>> release from master, and both of these branches already contain the
>>> features from msvc.
>>
>> I'm ok with that.  However, ...
>>
>> [SNIP good explanation]
>>
> I've verified what you said by experimenting with a fresh automake.git
> clone, and indeed you are right.  So sorry for the confusion, and thanks
> for correcting me.
> 
> Still, even without the merge conflicts I had (erroneously) predicted,
> a serious problem would remain with the msvc->maint merge, that is ...
> 
>>> ... worse, the code in maint would end up having a behaviour more similar to
>>> that of the next major version than to that of the next maintenance version.
>>> We could backport the hacks for 1.11.2 into maint, and confuse the
>>> already-too-messy automake history even more.  Neither of these two
>>> possibilities seems particularly appealing to me, given that in the end
>>> they do not offer any real advantage anyway.
>>
>> This is a conclusion from your above faulty assumption, I believe,
>>
> It seems to me that the part of my argumentation quoted above is still
> correct; could you explain in more detail why you think it is wrong?
> Thanks.

No, I can't, because you are right. Oops, sorry. The behaviour of maint would
indeed be as in master with extra-portability being enabled by -Wall. But
currently lib/compile and lib/depcomp are ancient in maint compared to the
current version as available in both branch-1.11 and master, and *I* think
that is worse.

In retrospect, the 1.11 variant should have been developed first, then merged
into master via maint, then master could have been fixed to its current state.

Oh well. Let's forget about it.

Sorry for the noise, and cheers,
Peter



Re: Automake 1.11.2b test release

2012-01-26 Thread Peter Rosin
Stefano Lattarini skrev 2012-01-25 09:40:
> Please report bugs and problems to , and send
> general comments and feedback to .

Looks as usual on MinGW with cl, with only lzma.test failing as reported
previously (can be avoided with XZ_DEFAULTS=--memlimit=150MiB in the
environment, but I forgot about that).

Cheers,
Peter

$ export CC="/home/peda/automake-1.11.2b/lib/compile cl -nologo"
$ export CFLAGS=
$ export CXX="/home/peda/automake-1.11.2b/lib/compile cl -nologo"
$ export CXXFLAGS=
$ export NM="dumpbin -symbols"
$ export STRIP=:
$ export AR="/home/peda/automake-1.11.2b/lib/ar-lib lib"
$ export RANLIB=:
$ ./configure
checking whether make supports nested variables... yes
checking build system type... i686-pc-mingw32
checking for a BSD-compatible install... /bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for perl... /bin/perl
checking whether /bin/perl supports ithreads... no
checking for tex... no
checking whether autoconf is installed... yes
checking whether autoconf works... yes
checking whether autoconf is recent enough... yes
checking whether ln works... yes
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking whether /bin/sh has working 'set -e' with exit trap... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating contrib/Makefile
config.status: creating doc/Makefile
config.status: creating lib/Automake/Makefile
config.status: creating lib/Makefile
config.status: creating lib/am/Makefile
config.status: creating m4/Makefile
config.status: creating tests/Makefile
config.status: creating tests/defs
config.status: creating tests/aclocal-1.11
config.status: creating tests/automake-1.11

WARNING: You are about to use a beta version of automake.
WARNING: It might easily suffer from new bugs or regressions.
WARNING: You are strongly advised not to use it in production code.

Please report bugs, problems and feedback to .
$ make
Making all in lib
make[1]: Entering directory `/home/peda/automake-1.11.2b/lib'
Making all in Automake
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib/Automake'
  GEN      Config.pm
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib/Automake'
Making all in am
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib/am'
make[2]: Nothing to be done for `all'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib/am'
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib'
make[2]: Nothing to be done for `all-am'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib'
make[1]: Leaving directory `/home/peda/automake-1.11.2b/lib'
Making all in .
make[1]: Entering directory `/home/peda/automake-1.11.2b'
  GEN      automake
  GEN      aclocal
make[1]: Leaving directory `/home/peda/automake-1.11.2b'
Making all in contrib
make[1]: Entering directory `/home/peda/automake-1.11.2b/contrib'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/contrib'
Making all in doc
make[1]: Entering directory `/home/peda/automake-1.11.2b/doc'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/doc'
Making all in m4
make[1]: Entering directory `/home/peda/automake-1.11.2b/m4'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/m4'
Making all in tests
make[1]: Entering directory `/home/peda/automake-1.11.2b/tests'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/tests'
$ make check -j8
Making check in lib
make[1]: Entering directory `/home/peda/automake-1.11.2b/lib'
Making check in Automake
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib/Automake'
make[2]: Nothing to be done for `check'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib/Automake'
Making check in am
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib/am'
make[2]: Nothing to be done for `check'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib/am'
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib'
make[2]: Nothing to be done for `check-am'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib'
make[1]: Leaving directory `/home/peda/automake-1.11.2b/lib'
Making check in .
make[1]: Entering directory `/home/peda/automake-1.11.2b'
make[1]: Nothing to be done for `check-am'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b'
Making check in contrib
make[1]: Entering directory `/home/peda/automake-1.11.2b/contrib'
make[1]: Nothing to be done for `check'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/contrib'
Making check in doc
make[1]: Entering directory `/home/peda/automake-1.11.2b/doc'
make[1]: Nothing to be done for `check'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/doc'
Ma

Re: Automake 1.11.2b test release

2012-01-27 Thread Peter Rosin
Stefano Lattarini skrev 2012-01-26 16:17:
> Hi Peter.
> 
> On 01/26/2012 04:08 PM, Peter Rosin wrote:
>> Stefano Lattarini skrev 2012-01-25 09:40:
>>> Please report bugs and problems to , and send
>>> general comments and feedback to .
>>
>> Looks as usual on MinGW with cl, with only lzma.test failing as reported
>> previously (can be avoided with XZ_DEFAULTS=--memlimit=150MiB in the
>> environment, but I forgot about that).
>>
> That's good news!
> 
> BTW, I can now report that the testsuite passes also on a Cygwin 1.5.25
> system (unfortunately, most Texinfo tests are skipped there, since TeX
> is not installed on the system).

And here's from Cygwin "1.7.10s(0.259/5/3) 20120123" (latest snapshot, with
release candidate "quality", 1.7.10 is coming RSN, as they say).

lzma.test fails for the same reason as above in MinGW, and transform2.test
is an old known failure. JFTR, my TeX install has been corrected with a tiny
patch (that has been mentioned previously on the Automake lists).

Cheers,
Peter

$ make check -j8
Making check in lib
make[1]: Entering directory `/home/peda/automake-1.11.2b/lib'
Making check in Automake
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib/Automake'
make[2]: Nothing to be done for `check'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib/Automake'
Making check in am
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib/am'
make[2]: Nothing to be done for `check'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib/am'
make[2]: Entering directory `/home/peda/automake-1.11.2b/lib'
make[2]: Nothing to be done for `check-am'.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/lib'
make[1]: Leaving directory `/home/peda/automake-1.11.2b/lib'
Making check in .
make[1]: Entering directory `/home/peda/automake-1.11.2b'
make[1]: Nothing to be done for `check-am'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b'
Making check in contrib
make[1]: Entering directory `/home/peda/automake-1.11.2b/contrib'
make[1]: Nothing to be done for `check'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/contrib'
Making check in doc
make[1]: Entering directory `/home/peda/automake-1.11.2b/doc'
make[1]: Nothing to be done for `check'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/doc'
Making check in m4
make[1]: Entering directory `/home/peda/automake-1.11.2b/m4'
make[1]: Nothing to be done for `check'.
make[1]: Leaving directory `/home/peda/automake-1.11.2b/m4'
Making check in tests
make[1]: Entering directory `/home/peda/automake-1.11.2b/tests'
make  defs aclocal-1.11 automake-1.11
make[2]: Entering directory `/home/peda/automake-1.11.2b/tests'
make[2]: `defs' is up to date.
make[2]: `aclocal-1.11' is up to date.
make[2]: `automake-1.11' is up to date.
make[2]: Leaving directory `/home/peda/automake-1.11.2b/tests'
make  check-TESTS
make[2]: Entering directory `/home/peda/automake-1.11.2b/tests'
make[3]: Entering directory `/home/peda/automake-1.11.2b/tests'
PASS: pm/Condition.pl
PASS: pm/Wrap.pl
PASS: pm/Version.pl
PASS: pm/DisjConditions.pl
SKIP: get-sysconf.test
PASS: self-check-env-sanitize.test
PASS: self-check-report.test
PASS: pm/Condition-t.pl
PASS: aclocal3.test
PASS: pm/DisjConditions-t.pl
PASS: aclocal.test
PASS: aclocal8.test
PASS: acloca11.test
PASS: aclibobj.test
PASS: aclocal9.test
PASS: acloca12.test
PASS: acloca15.test
PASS: acloca16.test
PASS: acloca17.test
PASS: aclocal6.test
PASS: acloca10.test
PASS: aclocal4.test
PASS: acloca19.test
PASS: acloca21.test
PASS: acloca13.test
PASS: aclocal5.test
PASS: aclocal-install-absdir.test
PASS: aclocal-print-acdir.test
PASS: acloca20.test
PASS: aclocal-path.test
PASS: aclocal-path-install.test
PASS: aclocal-path-nonexistent.test
PASS: aclocal-path-precedence.test
PASS: acloca18.test
PASS: aclocal-acdir.test
PASS: acoutnoq.test
PASS: acoutpt.test
PASS: aclocal-path-install-serial.test
PASS: acloca14.test
PASS: acoutqnl.test
PASS: acsilent.test
PASS: acoutpt2.test
PASS: acloca22.test
PASS: acsubst.test
PASS: acsubst2.test
XFAIL: all.test
PASS: acoutbs.test
PASS: aclocal7.test
PASS: acoutbs2.test
SKIP: amhello-cross-compile.test
PASS: amassign.test
PASS: alloca.test
PASS: alloca2.test
PASS: ammissing.test
PASS: amopt.test
PASS: amhello-binpkg.test
PASS: alpha.test
PASS: alpha2.test
PASS: all2.test
PASS: amsubst.test
PASS: ansi2.test
PASS: ansi4.test
PASS: amhello-cflags.test
PASS: ansi8.test
PASS: ansi.test
PASS: ansi3b.test
PASS: ansi3.test
PASS: ar-lib.test
PASS: ansi5.test
PASS: ar-lib2.test
SKIP: ar-lib5a.test
PASS: ar-lib3.test
PASS: libtool-macros.test
PASS: ansi9.test
PASS: ansi6.test
PASS: ar2.test
PASS: ar-lib7.test
PASS: ansi2knr-deprecation.test
PASS: ansi7.test
PASS: ar3.test
PASS: ar.test
PASS: ar4.test
PASS: a

Re: Automake 1.11.2b test release

2012-01-27 Thread Peter Rosin
Stefano Lattarini skrev 2012-01-27 17:16:
> On 01/27/2012 03:18 PM, Peter Rosin wrote:
*snip*
>> And here's from Cygwin "1.7.10s(0.259/5/3) 20120123" (latest snapshot, with
>> release candidate "quality", 1.7.10 is coming RSN, as they say).
>>
>> lzma.test fails for the same reason as above in MinGW,
>>
> Maybe it would be nice to start working around this before the release,
> by exporting "XZ_DEFAULTS=--memlimit=150MiB" in the environment?  If yes,
> extra points to whoever beats me at doing so ;-)

I'm not bothered enough, sorry, and there should be a way for the user
to override in case 150MiB (or whatever we select) is not appropriate, so
some care has to be taken...

What I don't get is what the f/#¤ they are doing stumbling over memory
issues when compressing what could best be described as tiny files, at
least compared to the memory that is available.  Crappy.

>> and transform2.test is an old known failure.
>>
> Has this already been reported in details and/or analysed?

There's this:
http://lists.gnu.org/archive/html/automake-patches/2010-08/msg00097.html

>> JFTR, my TeX install has been corrected with a tiny
>> patch (that has been mentioned previously on the Automake lists).
>>
> And I seem to recall you had reported the issue upstream to the Cygwin
> list as well ...  Hasn't it been fixed yet?

Nope, not that I know of.  Perhaps time for a ping...
http://cygwin.com/ml/cygwin/2011-11/msg00393.html

Cheers,
Peter



Re: Automake 1.11.2b test release

2012-01-27 Thread Peter Rosin
Stefano Lattarini skrev 2012-01-27 18:37:
> On 01/27/2012 05:43 PM, Peter Rosin wrote:
>> Stefano Lattarini skrev 2012-01-27 17:16:
>>> On 01/27/2012 03:18 PM, Peter Rosin wrote:
>> *snip*
>>>> And here's from Cygwin "1.7.10s(0.259/5/3) 20120123" (latest snapshot, with
>>>> release candidate "quality", 1.7.10 is coming RSN, as they say).
>>>>
>>>> lzma.test fails for the same reason as above in MinGW,
>>>>
>>> Maybe it would be nice to start working around this before the release,
>>> by exporting "XZ_DEFAULTS=--memlimit=150MiB" in the environment?  If yes,
>>> extra points to whoever beats me at doing so ;-)
>>
>> I'm not bothered enough, sorry, and there should be a way for the user
>> to override in case 150MiB (or whatever we select) is not appropriate, so
>> some care has to be taken...
>>
> Well, the lzma compression has been deprecated in favour of xz/lzip already,
> and the lzma support will be removed in automake 1.12 anyway; so I say we
> stop worrying about this issue until we see a bug report from a "real world"
> user, OK?

Fine by me, I don't worry about this one at all.

>>>> and transform2.test is an old known failure.
>>>>
>>> Has this already been reported in details and/or analysed?
>>
>> There's this:
>> http://lists.gnu.org/archive/html/automake-patches/2010-08/msg00097.html
>>
> Thanks for the link.  What do you think of the workaround provided by the
> attached patch?  Good to go before 1.11.3?

I haven't tested the patch, but if you ask me, I don't like "hiding" this
in a SKIP.  This is another case where a "local" XFAIL would be really
nice.  How impossible is it to introduce some kind of mechanism that a
test can trigger when it has detected some condition that makes a future
FAIL expected?

Let me know if you still want me to test your patch.

Cheers,
Peter



Re: Automake 1.11.2b test release

2012-01-28 Thread Peter Rosin
Stefano Lattarini skrev 2012-01-27 21:58:
> On 01/27/2012 09:40 PM, Peter Rosin wrote:
>> Stefano Lattarini skrev 2012-01-27 18:37:
>>
>>> Thanks for the link.  What do you think of the workaround provided by the
>>> attached patch?  Good to go before 1.11.3?
>>
>> I haven't tested the patch, but if you ask me, I don't like "hiding" this
>> in a SKIP.  This is another case where a "local" XFAIL would be really
>> nice.  How impossible is it to introduce some kind of mechanism that a
>> test can trigger when it has detected some condition that makes a future
>> FAIL expected?
>>
> It's already there -- in master only, where we have TAP support.

Ok, good!
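
(For the record, with TAP a test script can mark an individual result
as an expected failure by itself, using the "TODO" directive -- a
minimal sketch, not taken from the actual Automake testsuite:

  #! /bin/sh
  echo 1..2
  echo 'ok 1 - basic transform works'
  # TODO turns a failure into an expected failure (XFAIL)
  echo 'not ok 2 - name transform # TODO known MSYS limitation'

A TAP-aware harness then counts test 2 as XFAIL rather than FAIL.)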

>> Let me know if you still want me to test your patch.
>>
> Yes please, that would be appreciated.  We can apply the patch to maint as
> a band-aid, and switch to proper "localized XFAIL" once we merge to master
> (extra kudos to anyone who volunteers to implement such a follow-up ;-).

PASS on MinGW, SKIP on Cygwin. Good.

Cheers,
Peter



Re: automake 1.11.3 check-TESTS and command line length

2012-02-27 Thread Peter Rosin
Stefano Lattarini skrev 2012-02-22 21:54:
> On 02/22/2012 09:22 PM, Bob Friesenhahn wrote:
>> On Wed, 22 Feb 2012, Stefano Lattarini wrote:

>>> I don't understand how that patch could actually work ...  If there are
>>> too many tests in $(TESTS), there will be too many logs in $(TEST_LOGS),
>>> and since the recipe for $(TEST_SUITE_LOG) contains the expansion of
>>> $(TEST_LOGS), the command line length limit for "/bin/sh -c" will be
>>> exceeded anyway ...
>>
>> I don't claim to understand how the patch functions other than
>> that it seems to delegate responsibility to a subordinate make.
>>
> Basically, it was trying (and succeeding!) to pass the list of tests
> to the subordinate make on the stdin rather than on the command line,
> to avoid hitting any command-line length limit.  It was a pretty
> simple and clever hack.  The problem is that the list of tests will
> still *have* to be finally expanded in the recipe that creates
> 'test-suite.log', at which point all our attempts to avoid exceeding
> command line length limits in recursive make invocations become
> useless, no matter how clever or fast.
> 
>>>> A GNU make dependency would be a real problem.  A dependency on a
>>>> properly-working standard make would not be such a problem.

>>> Is the test suite actually failing due to "exceeded command line length"
>>> on any system different from Cygwin or MinGW?  If not, I don't see how a
>>> GNU make dependency would be a problem (if not for the fact that it
>>> uglifies the already hideous patch even more).
>>
>> I have only observed it on MinGW/MSYS but that does not mean it will
>> not appear on some Unix OS with unusually short command line length.
>>
> True; but I say, let's cross that bridge when we come to it :-)
> 
>> A GNU make dependency would be a problem if it depends on GNU make
>> for other systems.
>>
> This wouldn't be the case: the change causes the GNU make specific code
> to be used only if ./configure determines that the make program in use
> is GNU make.  Otherwise, the old implementation would still be used.
> So, no regression for the non-GNU makes (even if the pre-existing
> command-line length issue wouldn't have been fixed for them).
> 
>> On MinGW, /bin/sh throws an error but I don't recall that it provides
>> a useful diagnostic like "exceeded command line length".  It seemed
>> to me that the argument list was truncated.
>>
>>>> I have not run into any problems when using Ralf's patch, but it apparently
>>>> does not address all issues so Ralf backed it out.

>>> In fact, it fundamentally fails to address the issues -- that's why Ralf
>>> backed it out.
>>
>> All I know is that with the patch I did not experience a failure on any
>> system.
>>
> That's weird; clearly I'm missing something in the big picture here ...
> Maybe Peter can chime in with his MinGW expertise and save the day? ;-)

I *think* the environment and the command line share space (approx 64kB --
I repeat *think* here, I don't know the details off the top of my head; Cygwin
isn't affected since it uses the cygwin DLL to communicate this stuff between
cygwin processes using normal ipc mechanisms).  If I'm right, shortening the
command line of a process frees up room below that shared limit, so the
problem might be sufficiently alleviated even if it isn't eliminated.

I'm no expert in that area...
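
(For reference, the "stdin trick" from Ralf's patch boiled down to
something like this -- a rough sketch based on the description above,
not the actual patch, and GNU-make-only since it relies on '-f -':

  # instead of expanding the huge list into the sub-make's argv ...
  #   $(MAKE) check-TESTS TEST_LOGS='t1.log t2.log ... lots more'
  # ... feed it to the sub-make on stdin:
  echo 'TEST_LOGS = t1.log t2.log t3.log' \
    | $(MAKE) -f Makefile -f - check-TESTS

The list then never appears in any argv at that point; the killer is
that it still has to be expanded later, in the recipe that creates
test-suite.log.)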

Cheers,
Peter



Re: Automake 1.11.3b test release

2012-03-28 Thread Peter Rosin
Stefano Lattarini skrev 2012-03-25 16:46:
> We are pleased to announce the Automake 1.11.3b test release.
*snip*
> Please report bugs and problems to , and send
> general comments and feedback to .

On an up-to-date Cygwin 1.7 install, nothing unexpected.  It's just the
old problem with lzma.test that is worked around with

export XZ_DEFAULTS=--memlimit=20MiB

And likewise on my MSYS/MinGW install with gcc, no FAILs with that export
in place.


But if I run with the following in the environment on MSYS:

export CC='/home/peda/automake-1.11.3b/lib/compile cl -nologo'
export CFLAGS=-MD
export CXX='/home/peda/automake-1.11.3b/lib/compile cl -nologo'
export CXXFLAGS=-MD
export NM='dumpbin -symbols'
export AR='/home/peda/automake-1.11.3b/lib/ar-lib lib'
export STRIP=:
export RANLIB=:

I do get a few FAILs:

FAIL: depcomp3.test
FAIL: libtool3.test
FAIL: pr307.test
FAIL: pr401b.test
FAIL: silent-many-generic.test

I think all these FAILs are related to switching to gcc/g++ but keeping
CFLAGS/CXXFLAGS, which is a well-known problem that has been fixed on
the master branch.

If you are a bit more adventurous and run with the following in the
environment on MSYS

export CC='cl -nologo'
export CFLAGS=-MD
export CXX='cl -nologo'
export CXXFLAGS=-MD
export NM='dumpbin -symbols'
export AR=lib
export STRIP=:
export RANLIB=:

you get a few more FAILs

FAIL: depcomp3.test (gcc/CFLAGS)
FAIL: libtool3.test (gcc/CFLAGS)
FAIL: pr300-ltlib.test  (needs AM_PROG_AR)
FAIL: pr307.test(needs AM_PROG_AR)
FAIL: pr401b.test   (needs AM_PROG_AR)
FAIL: silent-many-generic.test  (flex and )
FAIL: subobj9.test  (see below)
FAIL: yacc-dist-nobuild-subdir.test ($MAKE -e brings in $CC)

subobj9 fails in an interesting way.  The test requires g++ (but not gcc),
and then proceeds with CC=cl and CXX=g++.  But cl wants OBJEXT=obj while g++
wants OBJEXT=o, and g++ wins (it is tested later).  This results in various
unexpected outcomes in further configure tests and things go downhill from
there.  The actual non-zero return that causes the test to finally fail is
that 'lib' is used as the archiver without the ar-lib wrapper, which of
course bombs out.

The other tests fail for reasons we have covered on master -- I think
so, anyway.  So, all things considered, I think we're in pretty good shape.

Cheers,
Peter



Static library naming

2012-09-19 Thread Peter Rosin
Hi!

When you write

lib_LIBRARIES = libhello.a

you express the desire to build a "hello" archive. Virtually
everywhere such an archive is expected to be named, tada,
libhello.a

Enter Windows. When using any and all toolchains not
originating from GNU, such an archive is expected to be named
hello.lib instead.  Anyway, at least this is the case for
Microsoft tools, and I think the other major players follow
MS on this.

Sure, it still works to have libraries named libhello.a with
MS tools (since the linker assumes that files with unknown
extensions are object files, and archives happen to fit), but
it *feels* wrong and non-native. Besides, Libtool creates
hello.lib and hello-0.dll when it builds Libtool libraries
using MS tools. Automake ought to also follow the naming
convention of the platform.

So, since nothing is impossible, the question is how
impossible it would be to beat Automake into creating
hello.lib from the above rule?

Then there's the problem with other mentions of libhello.a,
such as in build rules and dependencies for other libraries.

The only way I can see this working is to create something
like @libhello_a@ in Makefile.in for all mentions of
libhello.a in Makefile.am, and then have configure replace
that with libhello.a or hello.lib when it creates the final
Makefile.
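
To make the idea concrete, a sketch (hypothetical and untested,
including the MS-tools condition):

  # configure.ac
  if test "x$use_ms_tools" = xyes; then  # placeholder condition
    libhello_a=hello.lib
  else
    libhello_a=libhello.a
  fi
  AC_SUBST([libhello_a])

  # Makefile.in, as Automake would have to generate it
  lib_LIBRARIES = @libhello_a@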

But that will probably crash and burn when conditionals
and variables etc are entered into the mix...

Thoughts?

Cheers,
Peter



Re: Static library naming

2012-09-23 Thread Peter Rosin
Hi Stefano!

Thanks for your input!

On 2012-09-20 16:36, Stefano Lattarini wrote:
> Adding the Automake-NG list in CC: (see below for the motivation).
> 
> On 09/20/2012 12:49 AM, Peter Rosin wrote:
>> Hi!
>>
> Hi Peter.
> 
>> When you write
>>
>> lib_LIBRARIES = libhello.a
>>
>> you express the desire to build an "hello" archive. Virtually
>> everywhere such an archive is expected to be named, tada,
>> libhello.a
>>
>> Enter Windows. When using any and all toolchains not
>> originating from GNU, such an archive is expected to be named
>> hello.lib instead.  Anyway, at least this is the case for
>> Microsoft tools and I think the other major players follows
>> MS on this.
>>
>> Sure, it still works to have libraries named libhello.a with
>> MS tools (since the linker assumes that files with unknown
>> extensions are object files, and archives happen to fit),
>>
>> [SNIP]
>>
> Then I'd say: if it ain't broken, don't fix it :-)

Well, it is currently broken in that the compile script does
not find libhello.a for the compiler driver when given -lhello
as input, but that's easily fixable of course. I have not
added that code exactly because I think it is the wrong
fix. There should not be any need for recognizing libhello.a
when the file should have been named hello.lib from the start.

And I didn't leave this "hole" in the compile script on a whim;
plugging it would fix the last failure in the Libtool testsuite when
using MS tools, and it would have been really nice to "finish" that
part of the work. But I didn't want to temporarily add libhello.a
support to the compile script because it is usually so much harder
to remove features than it is to add them. I.e. I want to see
the path forward before adding extra stuff like that, as you end
up not knowing when it is ok to remove the extra baggage.

>> but it *feels* wrong and non-native. Besides, Libtool creates
>> hello.lib and hello-0.dll when it builds Libtool libraries
>> using MS tools. Automake ought to also follow the naming
>> convention of the platform.
>>
> That might be something to be considered for Automake-NG, where we
> are anyway breaking backward-compatibility and changing some user
> level APIs (albeit "not too much", hopefully).  I'd say "please open
> a bug report, so we won't forget about the issue", but then I
> remember Automake-NG still lacks a dedicated bug tracker :-(  Which
> makes me realize it's time to get one at last; any help in doing so
> would be appreciated.
> 
>> So, since nothing is impossible, the question is how
>> impossible it would be to beat Automake into creating
>> hello.lib from the above rule?
>>
> Assuming we can change APIs and require GNU make (3.81 or later),
> how would you suggest to proceed?  If we manage to find a workable
> and simple approach, we could implement it in Automake-NG.

I have no insight into how much easier GNU make makes this.
So, I can't really comment on all sides of how "bad" it would
be to require Automake-NG. My concern here is not that GNU
make isn't available (it typically is), but instead that you
would have to use Automake-NG to implement this feature in
the build system. That would mean that projects concerned
most with portability (i.e. those not switching to Automake-NG
because they're not willing to abandon vendor makes) will not
have a way to properly support MS tools on Windows. I.e. those
maintainers most likely to accept these portability changes
will have no simple way to do so, as they would have to select
which kind of portability is most important: vendor makes or
the best MS support available.

>> Then there's the problem with other mentions of libhello.a,
>> such as in build rules and dependencies for other libraries.
>>
>> The only way I can see this working is to create something
>> like @libhello_a@ in Makefile.in for all mentions of
>> libhello.a in Makefile.am, and then have configure replace
>> that with libhello.a or hello.lib when it creates the final
>> Makefile.
>>
>> But that will probably crash and burn when conditionals
>> and variables etc are entered into the mix...
>>
> Yeah, I think we don't really want to go there.  We might simply
> document that the developer is expected to use proper indirections
> if he wants to support the MS tools in its package.  So that our
> developer declares (say):
> 
> lib_LIBRARIES = bar/foo  # No more bar/libfoo.a!
> 
> and Automake will generate a make variable (say) $(am_lib_foo) or
> $(libs/foo), that expands to 'bar/libfoo.a' when POSIX tools are
> in use

Re: Static library naming

2012-09-24 Thread Peter Rosin
On 2012-09-24 09:54, Stefano Lattarini wrote:
> On 09/23/2012 11:47 PM, Peter Rosin wrote:
>> On 2012-09-20 16:36, Stefano Lattarini wrote:
>>> On 09/20/2012 12:49 AM, Peter Rosin wrote:
>>>> When you write
>>>>
>>>> lib_LIBRARIES = libhello.a
>>>>
>>>> you express the desire to build an "hello" archive. Virtually
>>>> everywhere such an archive is expected to be named, tada,
>>>> libhello.a
>>>>
>>>> Enter Windows. When using any and all toolchains not

[SNIP]

>> This sounds ok to me, it would require adding support to the
>> compile script for -lhello -> libhello.a, but now we know why
>> we need it and that we actually do need it. The only concern is
>> that I'm not 100% happy with requiring Automake-NG, as outlined
>> above. So, how much harder is it to support this kind of stuff
>> in vanilla Automake? What are the obstacles? It looks just like
>> variable expansion to me, and how broken can that be? Is it
>> that a construct such as $(am_lib_foo) can't be portably used
>> in every place it would need to?
>>
>> Because you are surely not suggesting that support for the
>> current
>>  lib_LIBRARIES = libhello.a
>> is going away over this, are you?
>>
> Actually, I was suggesting precisely that :-/

I expect that to be a hard sell? And I see no clear way to select
non-standard library names like "hello.a".

>> (and that the reason for going to Automake-NG would be
>> this API-break)?
>>
> Exactly -- I suggested doing it only for Automake-NG since such
> a backward-incompatibility would be totally unacceptable in
> mainline Automake.
> 
> But then it occurs to me: we could make my proposal perfectly
> backward-compatibly by adding a new "special syntax" like:
> 
> lib_LIBRARIES = &hello

Or add a new primary

lib_PORTLIBS = hello

or

lib_ARCHIVES = hello

because the & sign is just plain ugly. Or if a new primary
is too "expensive", or if a GOOD primary name can't be found
(I'm not too fond of the above), perhaps

lib_LIBRARIES = lib(hello)

which seems very unlikely even for a non-standard library name.

But I don't really care all that much about the exact syntax.

> that will do all the translations and variable setting I proposed
> above, while continuing to support:
> 
> lib_LIBRARIES = libhello.a
> 
> seamlessly (but now with a warning in the 'extra-portability'
> category, starting from Automake 1.14 at least).
> 
> Does this sound better?

Yes, because I don't want this to be Automake-NG only. But since
this scheme is opt-in, the compile script "must" be adjusted to
handle libhello.a anyway. And with an adjusted compile script, I
suppose the acceptable level of ugliness elsewhere is lowered, as
it only fixes the (mostly) cosmetic naming problem.

Cheers,
Peter




Re: Static library naming

2012-09-26 Thread Peter Rosin
On 2012-09-25 21:33, Stefano Lattarini wrote:
> [Dropping Automake-NG list, from the next reply]
> 
> On 09/24/2012 10:51 AM, Peter Rosin wrote:
>>
>> [MEGA-SNIP]
>>
>> Yes, because I don't want this to be Automake-NG only. But since
>> this scheme is opt-in, the compile script "must" be adjusted to
>> handle libhello.a anyway. And with an adjusted compile script, I
>> suppose the acceptable level of ugliness elsewhere is lowered, as
>> it only fixes the (mostly) cosmetic naming problem.
>>
> So, if I understand correctly, you're saying that at this point it's
> better and simpler to just adjust the 'compile' script?  Or am I
> misunderstanding? (in which case, sorry to be so dense!)

Short answer: No, not quite, I still would like the possibility to
have Automake generate hello.lib without jumping through hoops.
Sorry for not being clear enough.

Long answer: I attempted to say that given the plan to require
projects to opt-in for hello.lib instead of hardcoding libhello.a,
the compile script has to accommodate those that don't opt-in and
thus also support the libhello.a naming. But after having done that,
nothing is broken except for the (mostly) cosmetic naming issue.
Given that it's mostly a cosmetic issue, I just assumed that the
tolerable level of "crap" in the Automake code and usage was
significantly lowered, which in turn might influence what kind of
solutions to consider. I.e. a new primary (cleanest syntax, but
maybe too "rich"?) vs. some kind of encoding of the library name
(&hello, lib(hello) or just libhello, which is best?) vs. something
else (ideas welcome of course).

Yes, I can probably beat Automake into generating hello.lib when
MS tools are in use by jumping through hoops on a project by
project basis with conditionals and (untested) stuff like the
below example .am file, but that's not very concise and unlikely
to ever get accepted into anything that has a decent maintainer.
The work is also basically endless, where a fix in Automake at
least has a chance of spreading "automatically".

if MSTOOLS
LIBHELLO = hello.lib
lib_LIBRARIES = hello.lib
hello_lib_SOURCES = $(HELLOSOURCES)
else
LIBHELLO = libhello.a
lib_LIBRARIES = libhello.a
libhello_a_SOURCES = $(HELLOSOURCES)
endif
HELLOSOURCES = foo.c bar.c

bin_PROGRAMS = hello
hello_SOURCES = hello.c
hello_LDADD = $(LIBHELLO)

It would be nice to make that (something like):

lib_LIBRARIES = lib(hello)
libhello_SOURCES = foo.c bar.c

bin_PROGRAMS = hello
hello_SOURCES = hello.c
hello_LDADD = $(am_libhello)

and lose the MSTOOLS crap from configure.ac in the process.

BTW, maybe it is sufficient to simply drop the .a part of the
library name and use that to "encode" that os/tool-dependent
naming is desired? I.e. "lib_LIBRARIES = libhello" in the above
example. Because exactly how many users have created libraries
without any filename extension at all? The "strangest" names I
have found are a bunch of "noinst_LIBRARIES = lib.a" in newlib
(that usage is also present in some other projects) but I
guess that will not translate well to any new scheme we come
up with because .lib is probably not a good name for any
library. But maybe I simply didn't look hard enough for
extensionless *_LIBRARIES names?

Cheers,
Peter




Re: Static library naming

2012-10-03 Thread Peter Rosin
On 2012-10-02 16:37, Stefano Lattarini wrote:
> Hi Peter, sorry again for the delay.
> 
> On 09/26/2012 10:40 AM, Peter Rosin wrote:
>> On 2012-09-25 21:33, Stefano Lattarini wrote:
>>> [Dropping Automake-NG list, from the next reply]
>>>
>>> On 09/24/2012 10:51 AM, Peter Rosin wrote:
>>>>
>>>> [MEGA-SNIP]
>>>>
>>>> Yes, because I don't want this to be Automake-NG only. But since
>>>> this scheme is opt-in, the compile script "must" be adjusted to
>>>> handle libhello.a anyway. And with an adjusted compile script, I
>>>> suppose the acceptable level of ugliness elsewhere is lowered, as
>>>> it only fixes the (mostly) cosmetic naming problem.
>>>>
>>> So, if I understand correctly, you're saying that at this point it's
>>> better and simpler to just adjust the 'compile' script?  Or am I
>>> misunderstanding? (in which case, sorry to be so dense!)
>>
>> Short answer: No, not quite, I still would like the possibility to
>> have Automake generate hello.lib without jumping through hoops.
>> Sorry for not being clear enough.
>>
>> Long answer: I attempted to say that given the plan to require
>> projects to opt-in for hello.lib instead of hardcoding libhello.a,
>> the compile script has to accommodate those that don't opt-in and
>> thus also support the libhello.a naming.
>>
> OK, clearer now.  This first step ("support the libhello.a name for
> those who don't opt-in") could also be done for Automake 1.12.5,
> if anyone is kind enough to provide a patch :-)

Attached (from maint).

> IMHO we should want to be clearer that something "magic" is
> going on.  So something like "lib(hello)" would still be better.
> (Determining the final colour of the bikeshed is left as
> exercise to the user ;-)

The one with the brush gets to decide, as usual. :-)

Cheers,
Peter
From f54c5b1304766778b691aa88475a10c68e41f058 Mon Sep 17 00:00:00 2001
From: Peter Rosin 
Date: Thu, 4 Oct 2012 00:08:26 +0200
Subject: [PATCH] compile: support libfoo.a naming when wrapping Microsoft
 tools

There is a future plan to provide some means to have Automake
create static libraries that are named differently depending
on the system [1].

The background is that everyone has always named static libraries
libfoo.a, except the Redmond crowd who names them foo.lib, and
you have to jump through hoops to have Automake create libraries
named foo.lib in the land of non-GNU Windows while still creating
libfoo.a everywhere else.

However, there is probably no sane way to accomplish that system
dependent naming discussed in [1] without user intervention,
which makes it necessary to support the classic libfoo.a naming
when using Microsoft tools in the best possible way, for the
benefit of all projects today and for future projects not
opting in to whatever scheme is selected for the problem at
hand.

[1] http://lists.gnu.org/archive/html/automake/2012-09/msg00028.html

* lib/compile (func_cl_dashl): As a last resort, match -lfoo with
libfoo.a, if that file exists on the library search path.
* t/compile4.sh: Remove obsolescent workaround for the above.
* t/compile6.sh: Extend to check that libbaz.a is indeed found
when baz.lib and baz.dll.lib do not exist and that bar.lib
and bar.dll.lib are preferred over libbar.a.

Signed-off-by: Peter Rosin 
---
 lib/compile   |7 ++-
 t/compile4.sh |8 
 t/compile6.sh |   10 +-
 3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/lib/compile b/lib/compile
index 7b4a9a7..45cf039 100755
--- a/lib/compile
+++ b/lib/compile
@@ -1,7 +1,7 @@
 #! /bin/sh
 # Wrapper for compilers which do not understand '-c -o'.
 
-scriptversion=2012-03-05.13; # UTC
+scriptversion=2012-10-03.22; # UTC
 
 # Copyright (C) 1999-2012 Free Software Foundation, Inc.
 # Written by Tom Tromey .
@@ -112,6 +112,11 @@ func_cl_dashl ()
   lib=$dir/$lib.lib
   break
 fi
+if test -f "$dir/lib$lib.a"; then
+  found=yes
+  lib=$dir/lib$lib.a
+  break
+fi
   done
   IFS=$save_IFS
 
diff --git a/t/compile4.sh b/t/compile4.sh
index 2e275a3..e5b5c57 100755
--- a/t/compile4.sh
+++ b/t/compile4.sh
@@ -70,14 +70,6 @@ $MAKE
 
 ./compile cl $CPPFLAGS $CFLAGS -c -o "$absmainobj" "$absmainc"
 
-# cl expects archives to be named foo.lib, not libfoo.a so
-# make a simple copy here if needed. This is a severe case
-# of badness, but ignore that since this is not what is
-# being tested here...
-if test -f sub/libfoo.a; then
-  cp sub/libfoo.a sub/foo.lib
-fi
-
 # POSIX mandates that the compiler accepts a space between the -I,
 # -l and -L options and their respective arguments.  Traditionally,
this should work also without a space.

Re: bug#13202: Make Microsoft Visual C recognize the .S file extension

2012-12-17 Thread Peter Rosin
Hi Rheinländer!

On 2012-12-17 01:41, Rheinländer wrote:
> Hi,
> 
> here is a suggestion for how to make MSVC (cl.exe) recognize the standard
> extension for assembly code in C projects: just add the compiler
> switch /Tc before the file in question, or use /TC to force all files
> mentioned on the command line to be treated as C code.

Interesting!

This is best added to the 'compile' script which is "owned" by Automake,
so I have added a CC.

At one point I posted a hacked version of 'compile' to the libffi list,
but it relied on a separate preprocessing step, followed by directly
invoking the assembler (ml.exe).

See the attachment in
http://sourceware.org/ml/libffi-discuss/2012/msg00144.html

It seems very nice to get rid of the extra preprocessing step.
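
(The two-step approach in that hacked script amounted to roughly this,
sketched from memory -- exact options may differ:

  # 1. run only the C preprocessor; -EP suppresses #line markers
  #    and -Tc makes cl treat the .S file as C source
  cl -nologo -EP -Tc asm.S > asm.i
  # 2. hand the preprocessed result to the Microsoft assembler
  ml -nologo -c -Fo asm.obj asm.i

The attraction of -Tc/-TC would be collapsing that into a single cl
invocation.)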

However, I can't get assembly to actually work with -Tc or -TC (see
below), so what am I doing wrong?

Cheers,
Peter

$ cat asm.S # stupid asm file generated from a trivial C file
; Listing generated by Microsoft (R) Optimizing Compiler Version 16.00.40219.01


TITLE   c:\home\peda\src\junk\asm.c
.686P
.XMM
include listing.inc
.model  flat

INCLUDELIB LIBCMT
INCLUDELIB OLDNAMES

PUBLIC  _main
; Function compile flags: /Odtp
_TEXT   SEGMENT
_main   PROC
; File c:\cygwin\home\peda\src\junk\asm.c
; Line 2
pushebp
mov ebp, esp
; Line 3
xor eax, eax
; Line 4
pop ebp
ret 0
_main   ENDP
_TEXT   ENDS
END

$ ml -c asm.S    # assembly input is sane!
Microsoft (R) Macro Assembler Version 10.00.40219.01
Copyright (C) Microsoft Corporation.  All rights reserved.

 Assembling: asm.S

$ cl -c -Tcasm.S # but cl -Tc fails miserably...
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.40219.01 for 80x86
Copyright (C) Microsoft Corporation.  All rights reserved.

asm.S
asm.S(1) : error C2061: syntax error : identifier 'generated'
asm.S(1) : error C2059: syntax error : ';'
asm.S(1) : error C2061: syntax error : identifier 'Microsoft'
asm.S(1) : error C2059: syntax error : ';'
asm.S(1) : error C2061: syntax error : identifier 'Optimizing'
asm.S(1) : error C2059: syntax error : ';'
asm.S(1) : error C2061: syntax error : identifier 'Version'
asm.S(1) : error C2059: syntax error : ';'
asm.S(1) : error C2059: syntax error : 'constant'
asm.S(3) : error C2017: illegal escape sequence
asm.S(3) : error C2017: illegal escape sequence
asm.S(3) : error C2017: illegal escape sequence
asm.S(3) : error C2017: illegal escape sequence
asm.S(3) : error C2017: illegal escape sequence
asm.S(3) : error C2017: illegal escape sequence
asm.S(4) : error C2059: syntax error : 'bad suffix on number'
asm.S(13) : error C2061: syntax error : identifier 'compile'
asm.S(13) : error C2059: syntax error : ';'
asm.S(13) : error C2143: syntax error : missing '{' before ':'
asm.S(13) : error C2059: syntax error : ':'
asm.S(16) : error C2061: syntax error : identifier 'c'
asm.S(16) : error C2059: syntax error : ';'
asm.S(16) : error C2059: syntax error : ':'
asm.S(16) : error C2017: illegal escape sequence
asm.S(16) : error C2017: illegal escape sequence
asm.S(16) : error C2017: illegal escape sequence
asm.S(16) : error C2017: illegal escape sequence
asm.S(16) : error C2017: illegal escape sequence
asm.S(16) : error C2017: illegal escape sequence
asm.S(17) : error C2143: syntax error : missing '{' before 'constant'
asm.S(17) : error C2059: syntax error : ''
asm.S(20) : error C2143: syntax error : missing '{' before 'constant'
asm.S(20) : error C2059: syntax error : ''
asm.S(22) : error C2143: syntax error : missing '{' before 'constant'
asm.S(22) : error C2059: syntax error : ''



Re: bug#13202: Make Microsoft Visual C recognize the .S file extension

2012-12-17 Thread Peter Rosin
Hi Jan,

Please keep replies on list.

On 2012-12-17 11:28, Rheinländer wrote:
> Hello Peter,
> 
>> However, I can't get assembly to actually work with -Tc or -TC (see
>> below), so what am I doing wrong?
>>
> 
> OK, I looked a bit more deeply in it and discovered that my .S file is
> actually a C file wrapper for an assembly include... The files are from
> the CLN library, I attached them.
> 
> My real problem is not to compile the assembly code (I have set
> -DNO_ASM anyway) but that the MSVC make breaks on the .S extension.
> Though looking at it again maybe I should suggest instead to the CLN
> folks to rename their file to .cc since it is a C file anyway.

Ok, except it isn't really a C file. It's assembly intended to be
preprocessed by the C preprocessor (as is indicated by the .S extension).
I think the preprocessor will reduce the source to very little in case
NO_ASM is defined (but you didn't attach enough files for me to tell
for sure).

I therefore think the 'compile' script I referred to earlier may
handle your case; it will preprocess the .S file and feed the remaining
few bits to ml which in turn will do nothing, hopefully. But maybe ml
will miss an END directive? Or something? Untested...

Cheers,
Peter




Re: bug#13324: Improvements to "dist" targets

2013-01-02 Thread Peter Rosin
On 2013-01-02 14:04, Stefano Lattarini wrote:
> On 01/02/2013 02:01 PM, Stefano Lattarini wrote:
>> On 01/02/2013 02:58 AM, Daniel Herring wrote:
>>> On Tue, 1 Jan 2013, Stefano Lattarini wrote:
>>>
 OTOH, what about distribution "tarballs" in '.zip' format?  They don't
 use tar at all ...  Time to deprecate them maybe?  Is anybody actually
 using them?  And while at it, what about the even more obscure 'shar'
 format?
>>>
>>> While I haven't manipulated a shar file in years, but zip is still
>>> the dominant archive format on MS platforms.
>>>
>> While this is absolutely true, my point is that it's not a format truly
>> used or required for distribution tarballs.  If you are going to compile
>> an Automake-based package from source on MS Windows, you'll need either
>> MinGW/MSYS or Cygwin, and AFAICS both those environment comes with
>> working tar and gzip programs.
>>
>> Or is there something that I'm missing?

Yes, I believe quite a few projects have a separately maintained Visual
Studio solution, seeded with handwritten config.h etc, meaning that they
don't require Autotools to build from source on Windows.

I can't give you an example off the top of my head, but I think
that e.g. ntp is like that (and I don't know if they also provide the
source as a .zip-file...)

Cheers,
Peter



Removal of INCLUDES in favour of AM_CPPFLAGS

2013-02-01 Thread Peter Rosin
Hi!

From NEWS in the master branch:

  - Support for the long-obsolete $(INCLUDES) variable has
been finally removed, in favour of the modern equivalent
$(AM_CPPFLAGS).



Why is this removal important? It forces changes to a hundred
(or so) Makefiles in *one* project I'm involved with. The fact
that AM_CPPFLAGS is AC_SUBSTed by the project and used mostly
for "global" flags and INCLUDES mostly for "local" stuff makes
for a pretty useful separation. But in quite a few of those
Makefiles, AM_CPPFLAGS (as AC_SUBSTed by configure) is augmented
via "AM_CPPFLAGS +=" constructs. I'm not at all confident that
I will be able to convert all of these uses without errors due
to switched include ordering or omissions or whatever. Further,
I do not have access to all relevant systems, so I'm not in a
position to check for errors. If it was 5-10 Makefiles, I would
trust myself to do it correctly, but on this scale, doing the
conversion without error would just be pure luck.
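
To illustrate the separation, here is a made-up miniature (not from
the real project):

  # configure.ac -- "global" flags, shared by every Makefile
  AM_CPPFLAGS="-I\$(top_builddir)/include -DPROJECT_GLOBALS"
  AC_SUBST([AM_CPPFLAGS])

  # some/dir/Makefile.am -- augments the AC_SUBSTed value ...
  AM_CPPFLAGS += -DDIR_SPECIFIC
  # ... while "local" include paths go in INCLUDES
  INCLUDES = -I$(srcdir)/private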

Why have I not done the conversion a lot earlier, when INCLUDES
have been deprecated since forever? Sure, there has been a
warning in the docs, since about forever, but since the support
wasn't removed yesterday, why would it be removed today?
I have not seen any warning in my project, until yesterday when
I forced a bootstrap with automake 1.12.5. You see, there is
some code that ensures that a project keeps bootstrapping with
the same version if it is rebootstrapped (at least on my
distribution), so even if I have had 1.12 installed for a few
months, it hasn't actually been used for this project, and I
haven't noticed/cared.

Bottom line is, even if stuff has been deprecated for ages,
people like me might not have noticed, even if they have been
using /bin/automake (which is a wrapper over here) on a weekly
basis. Even if I had noticed, changing a bunch of constructs
with INCLUDES into only using AM_CPPFLAGS isn't the most
rewarding or interesting thing to do, so I would probably not
have done it anyway. I would only risk stupid regressions.

When I force automake 1.12, everything keeps working, at least
as far as I can tell. But if support for INCLUDES is removed,
it will break, needless to say.

Also, this quote from the commit message removing INCLUDES support:

"So, by removing it in Automake 1.14, we will simplify
the transition path for people that want to switch to
Automake-NG."

is just brain-damage and completely ass-backwards, if you ask me.
Damnit, if there is a goal to make it easy to switch, that should
be the sole responsibility of Automake-NG. Especially for trivial
stuff like this. Period.

There is also a claim in some commit message that the *code* has
warned about INCLUDES being obsolete since 2002 (which would be
automake 1.7, I suppose), but then I ask why Automake 1.9.x and
1.11.x isn't warning me about it? Oh right, because -Wobsolete
wasn't the default back then, but I guess that's not important.
(Did you catch the sarcastic tone?) Anyway, claiming that the
code has warned about it for more than a decade is a *huge*
misrepresentation. A saner statement is that Automake 1.11.6 as
of *2012* did not warn about INCLUDES.

Stop this insanely aggressive feature removal crap.

Cheers,
Peter



Re: Removal of INCLUDES in favour of AM_CPPFLAGS

2013-02-01 Thread Peter Rosin
Hi Stefano,

On 2013-02-01 10:35, Stefano Lattarini wrote:
> On 02/01/2013 09:45 AM, Peter Rosin wrote:
>> From NEWS in the master branch:
>>
>>   - Support for the long-obsolete $(INCLUDES) variable has
>> been finally removed, in favour of the modern equivalent
>> $(AM_CPPFLAGS).
>>
>> Why is this removal important? It forces changes to a hundred
>> (or so) Makefiles in *one* project I'm involved with. The fact
>> that AM_CPPFLAGS is AC_SUBSTed by the project and used mostly
>> for "global" flags and INCLUDES mostly for "local" stuff makes
>> for a pretty useful separation. But in quite a few of those
>> Makefiles, AM_CPPFLAGS (as AC_SUBSTed by configure) is augmented
>> via "AM_CPPFLAGS +=" constructs. I'm not at all confident that
>> I will be able to convert all of these uses without errors due
>> to switched include ordering or omissions or whatever.
>>
> Actually, while recently re-reading some of the "aggressive" changes
> of late, I have come to realize the same thing.  Since the removal
> of INCLUDES is only implemented in master, I saw no hurry in
> reverting it though; but reconsidering it was on the radar.  Bottom
> line: a patch in that direction would be welcome, especially if its
> commit message condenses the rationales you have given here.

Oh, that's a relief! Sorry for slamming down open doors...

>> Also, this quote from commit message removing INCLUDES support:
>>
>>  "So, by removing it in Automake 1.14, we will simplify
>>  the transition path for people that want to switch to
>>  Automake-NG."
>>
>> is just brain-damage and completely ass-backwards, if you ask me.
>> Damnit, if there is a goal to make it easy to switch, that should
>> be the sole responsibility of Automake-NG. Especially for trivial
>> stuff like this. Period.
>>
> I'm not happy to say this, but I must admit I agree with you now.
> 
> This wrong approach is probably the result of me trying to keep a foot
> in both camps -- that is, maintaining mainline Automake while trying
> to encourage a switch to Automake-NG in the long term.  Probably not a
> good move, for any of those projects.
> 
> I should at this point decide whether just devote my "Automake time"
> to mainline Automake (which amounts at letting Automake-NG die,
> basically) or to Automake-NG (after tying some loose ends in the
> mainline Automake code base, of course).

My intention was not to scare you away from either of the
projects!

And in fact, I just expressed how I think removing support for
INCLUDES is wrong, for *both* projects! There's no sitting on
two chairs here. It's just not the sort of change your users
ask for, and it should not have been made. It's perhaps the sort
of change you sometimes wish you can do as a maintainer, but
that doesn't mean it's a good idea to do it. It will only cause
churn, ripples and bugs for your users. It can be a good
idea to remove support for long deprecated stuff when it hinders
progress, but supporting INCLUDES will never hinder progress (I
fail to see how anyway). To me, the change was made just because
it was perceived as messy or redundant. But the messiest part
of the removed code was the deprecation warning. Carrying on
with the support for INCLUDES in automake costs nearly nothing.
Supporting INCLUDES in automake-NG costs nearly nothing. The
gain is obvious; why would you *ever* want to hinder (or kill)
the upgrade path deliberately?

I think there are two classes of deprecations. One happens only
in the manual, when a better interface is invented, but the
support for the old interface is trivial to keep. There is
seldom a reason to kill the support for the old interface in
this case. Also, you don't need to pester users with deprecation
warnings as you are not intending to remove the support anyway
(that's what MS does when they want to lure their customers
deeper into the lock-in. Deprecating fopen et.al., like they
are going to remove it? Yeah, right. Sheesh...). If you still
want a warning for this case it should definitely be off by
default. The other class is when there is some fundamental
technical problem with keeping support for the old interface,
and you actually intend to remove it somewhere down the line.
In this case, your users are going to need to switch interfaces,
and they better do it sooner rather than later. For their own
good. This is where a deprecation warning that is on by default
is useful.

All in my humble opinion, of source. Errm, of course.

Cheers,
Peter

PS. Keep up the good work. I apologize for being too blunt.




Re: [Automake-NG] Removal of INCLUDES in favour of AM_CPPFLAGS

2013-02-01 Thread Peter Rosin
On 2013-02-02 01:15, Eric Blake wrote:
> On 02/01/2013 05:00 PM, Peter Rosin wrote:
>> Supporting INCLUDES in automake-NG costs nearly nothing.
> 
> This, however, is a statement I'm not willing to concede; so while I
> agree with the decision to deprecate (but not remove) INCLUDES from
> automake, I think it is fair game to state that someone switching to
> Automake-NG should be prepared to avoid INCLUDES, as part of that switch.

Oh. I claim ignorance. I blindly assumed the implementation in -NG
was just as trivial as in plain old Automake. When there are technical
reasons to drop INCLUDES in Automake-NG, it's a totally different
situation. I then agree that it's perfectly ok to issue a (default
visible) deprecation warning in Automake, in order to enable an easy
upgrade path to -NG in the future.

I should have known that the removal wasn't as trivially stupid as
it looked at first sight...

Cheers,
Peter




Re: bug#13578: [IMPORTANT] Savannah issues

2013-02-25 Thread Peter Rosin
On 2013-02-23 19:06, Stefano Lattarini wrote:
> On 02/23/2013 06:46 PM, Stefano Lattarini wrote:
>> On 02/21/2013 04:06 PM, Stefano Lattarini wrote:
>>> In a couple of days, I will proceed with this "branch moving":
>>>
>>>* branch-1.13.2 -> maint
>>>* maint -> master
>>>* master -> next
>>>
>> Done.
>>
> Damn, not really.  For some questionable reason, Savannah is rejecting
> my non-fast-forward push to master even if I specify '--force', and
> I cannot use the usual trick "delete the remote branch, then push the
> local one to it" trick that I typically use to work around this
> problem, since 'master' is the "current branch" of the remote
> repository, and that cannot be deleted to avoid confusing "git clone".

I was not aware that those moves would be non-fast-forwards, and I
think this is bad bad bad. It's quite hostile to do non-fast-forwards
on branches as central as master and maint. And I think git/savannah
is rejecting them quite rightly!

master and maint have never been published as "rewindable", and it should
be correct to base new work on them. They should be left alone, IMHO.
You should have implemented this more gradually, such that next would
have taken its role directly, but maint and master should have been
allowed to grow into the correct branches once the relevant releases had
been made. Or even better, implement the change right after a major
release so that master and maint would have been correctly positioned
from the start.

I have a few single-commit local branches that I will simply have to
cherry-pick to the new world order. Or is there some better way to move
these branches after their base has been pulled from under them?
Hopefully there isn't some big chunk of unpublished work that will be
killed by these disruptive changes...

Cheers,
Peter




Re: bug#13578: [IMPORTANT] Savannah issues

2013-02-25 Thread Peter Rosin
On 2013-02-25 10:16, Stefano Lattarini wrote:
> On 02/25/2013 09:14 AM, Peter Rosin wrote:
>> On 2013-02-23 19:06, Stefano Lattarini wrote:
>>> On 02/23/2013 06:46 PM, Stefano Lattarini wrote:
>>>> On 02/21/2013 04:06 PM, Stefano Lattarini wrote:
>>>>> In a couple of days, I will proceed with this "branch moving":
>>>>>
>>>>>* branch-1.13.2 -> maint
>>>>>* maint -> master
>>>>>* master -> next
>>>>>
>>>> Done.
>>>>
>>> Damn, not really.  For some questionable reason, Savannah is rejecting
>>> my non-fast-forward push to master even if I specify '--force', and
>>> I cannot use the usual trick "delete the remote branch, then push the
>>> local one to it" trick that I typically use to work around this
>>> problem, since 'master' is the "current branch" of the remote
>>> repository, and that cannot be deleted to avoid confusing "git clone".
>>
>> I was not aware that those moves would be non-fast-forwards, and I
>> think this is bad bad bad.
>>
> Note that the users can avoid branch-rewriting issues by renaming their
> 'master' to 'next' and their 'maint' to 'master' before pulling.  This
> should probably be stated in a message (on list *and* on savannah news)
> advertising the new versioning and branching scheme (message not yet
> written; it will be once the current issue is sorted out).

Hiding stuff like that in some documentation or on a mailing list will
not help. You should make it *easy* for people to work on and contribute
to automake. Forcing everyone to do a bunch of silly boring renames
is not *easy*. It's an obstacle, and obstacles make people nervous
and uneasy. Not good, and no, you can't document it away.
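
(The renames themselves are one-liners, in this order:

  git branch -m master next
  git branch -m maint master

but that is beside the point; people should not have to know that
they are needed.)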

>> It's quite hostile to do non-fast-forwards
>> on branches as central as master and maint. And I think git/savannah
>> is rejecting them quite rightly!
>>
> Savannah is rejecting all non-fast-forward pushes (which I find annoying);
> but it didn't prevent me from deleting and recreating maint, a change that
> will still appear as a non-fast-forward to any clone of our repository.
> 
> The reason it doesn't allow me to delete master as well is that doing so
> would prevent a "git clone" from checking out the sources of a freshly
> cloned automake, which can be very confusing (and of course, git cannot
> be aware of the fact that I intend to re-create 'master' just after
> having deleted it).

The reason is irrelevant; non-fast-forwards of central branches are evil.

>> master and maint have never been published as "rewindable", and it should
>> be correct to base new work on them. They should be left alone, IMHO.
>>
> Their content has been left alone in fact; it's their "name" that hasn't.
> 
>> You should have implemented this more gradually, such that next would
>> have taken its role directly, but maint and master should have been
>> allowed to grow into the correct branches once the relevant releases had
>> been made.
>>
> This would give a very confusing interim period IMHO.

Yes, confusing. Changes like this cause confusion.

> However, note that that we can still implement such a "gentler transition"
> (for 'master' only) if you really want to, by using a new branch name
> (maybe 'current' or 'devel') instead of 'master', keeping 'master' as a
> temporary "alias" to 'next' until the 2.0 release (at which point all of
> 'maint', 'master' and 'next' will be fast-forwarded to the commit that
> implements the 2.0 release).  I still prefer to pull this sore tooth out
> right now, though.

So messy.

>> Or even better, implement the change right after a major
>> release so that master and maint would have been correctly positioned
>> from the start.
>>
>> I have a few single-commit local branches that I will simply have to
>> cherry-pick to the new world order.
>>
> No, just rebase them on the new name of the branch they were based on;
> that is, if they were based on 'master', they are now to be considered
> based on 'next', if they were based on 'maint', they are now to be
> considered based on 'master', and if they were based on 'branch-1.13.2'
> they are not to be considered based on 'maint'.

( s/are not to/are now to/ )

Yes, that works. But it is a nuisance.
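
(Concretely: since the tip of the old 'master' is contained in the new
'next', a topic branch based on the old 'master' can be moved with
something like

  git rebase origin/next my-topic

and one based on the old 'maint' with "git rebase origin/master
my-topic" -- 'my-topic' being a hypothetical branch name.)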

>> Or is there some better way to move
>

Re: bug#13578: [IMPORTANT] Savannah issues

2013-02-27 Thread Peter Rosin
On 2013-02-26 19:30, Stefano Lattarini wrote:
> Hi Peter.
> 
> On 02/26/2013 12:53 AM, Peter Rosin wrote:
>> On 2013-02-25 10:16, Stefano Lattarini wrote:
>>
>>>
>>> Note that the users can avoid branch-rewriting issues by renaming their
>>> 'master' to 'next' and their 'maint' to 'master' before pulling.  This
>>> should probably be stated in a message (on list *and* on savannah news)
>>> advertising the new versioning and branching scheme (message not yet
>>> written; it will be once the current issue is sorted out).
>>
>> Hiding stuff like that in some documentation or on a mailing list will
>> not help. You should make it *easy* for people to work on and contribute
>> to automake. Forcing everyone to do a bunch of silly boring renames
>> is not *easy*. It's an obstacle, and obstacles make people nervous
>> and uneasy. Not good, and no, you can't document it away.
>>
> You might have good points, and possibly even be completely right...
> But I must ask, why didn't you step up during the lengthy discussion
> about this change, nor objected during the delay (almost a week) that
> was deliberately let pass between the decision and the implementation
> -- precisely to let this kind of late objections to come out?  What is
> the point of having such discussions in the first place, if people who
> oppose a proposed change (maybe even on solid ground and with sound
> reasons) only object *after* the change has been discussed, accepted
> and implemented?

The long winding "eyes glossing over" discussion about version numbers
had nothing in it about branches, except the initial proposal which
stated:

  * None of 'maint', 'master' and 'next' should be rewindable.

I was not aware that 'master' and 'next' were rewindable before.

Then there was the last message before the implementation that stated:

In a couple of days, I will proceed with this "branch moving":

   * branch-1.13.2 -> maint
   * maint -> master
   * master -> next


No other message mentions git branches that I could find, but I might
have missed some instance.

Now, there is more than one way to "move branches". The most natural
is to merge your way forward, in fact that's the only one that makes
sense if the branches are *not* *rewindable*.

Thinking about this for a few minutes, I think I would have (with
better commit messages):
  # create 'next'
  $ git branch next master

  # update 'master'
  $ git branch new-master maint
  $ git checkout new-master
  $ git merge --strategy=ours master -m "rename maint -> master"
  $ git checkout master
  $ git merge new-master # a simple fast-forward
  $ git branch -D new-master

  # update 'maint'
  $ git branch new-maint branch-1.13.2
  $ git checkout new-maint
  $ git merge --strategy=ours maint -m "rename branch-1.13.2 -> maint"
  $ git checkout maint
  $ git merge new-maint # a simple fast-forward
  $ git branch -D new-maint

Forgive me for assuming that the branches would not be rewound.

Also, I was away skiing last week, but I wouldn't have caught this
even if I had been "present".

BTW, I assume you could still use the mid part to update master, instead
of waiting for the savannah crew to help you. You just have to replace
the first "git branch new-master ..." with

  $ git branch new-master b4dbcb75

(Because I think b4dbcb75 is what 'maint' was before you rewrote it.)

>>>> It's quite hostile to do non-fast-forwards
>>>> on branches as central as master and maint. And I think git/savannah
>>>> is rejecting them quite rightly!
>>>>
>>> Savannah is rejecting all non-fast-forward pushes (which I find annoying);
>>> but it didn't prevent me from deleting and recreating maint, a change that
>>> will still appear as a non-fast-forward to any clone of our repository.
>>>
>>> The reason it doesn't allow me to delete master as well is that doing so
>>> would prevent a "git clone" from checking out the sources of a freshly
>>> cloned automake, which can be very confusing (and of course, git cannot
>>> be aware of the fact that I intend to re-create 'master' just after
>>> having deleted it).
>>
>> The reason is irrelevant. non-fast-forwards of central branches is evil.
>>
> Mostly, yes.  This time, considering that no commits were actually being
> dropped or rewritten, I believed it wasn't that bad, and was IMHO
> justified by the new improved versioning and branching scheme.
> 
>

Re: bug#13578: [IMPORTANT] Savannah issues

2013-02-27 Thread Peter Rosin
On 2013-02-27 10:28, Peter Rosin wrote:
> The long winding "eyes glossing over" discussion about version numbers
> had nothing in it about branches, except the initial proposal which
> stated:
> 
> * None of 'maint', 'master' and 'next' should be rewindable.
> 
> I was not aware that 'master' and 'next' were rewindable before.

I meant 'maint', not 'next', in the last sentence. Of course.

Cheers,
Peter




Re: bug#13578: [IMPORTANT] Savannah issues

2013-02-27 Thread Peter Rosin
On 2013-02-27 11:29, Stefano Lattarini wrote:
> On 02/27/2013 10:28 AM, Peter Rosin wrote:
>>
>> [SNIP]
>>
>> The long winding "eyes glossing over" discussion about version numbers
>> had nothing in it about branches, except the initial proposal which
>> stated:
>>
>>* None of 'maint', 'master' and 'next' should be rewindable.
>>
> It also stated:
> 
>   I also propose the following change to the branching scheme currently
>   implemented in the Automake Git repository:
> 
>   * The 'maint' branch will be reserved to cut of the next micro
> release; so it will just see fixes for regressions, trivial
> bugs, or documentation issues, and no "active" development
> whatsoever.
> 
>   * The 'master' branch will be where the development of the next
> minor release will take place; that is, a sort of "middle-ground"
> between the roles so far fulfilled by the 'maint' and 'master'
> branches in the current branching scheme.
> 
>   * The (new) 'next' branch will be reserved for the development
> of the next major release; it will basically take over the rule
> that is currently fulfilled by the 'master' branch.
> 
> I thought that was making clear that the then-current 'maint' and
> 'master' branches would have needed to be renamed in order to implement
> that new scheme.  But re-reading the above, I realize I wasn't making
> that clear at all (it sounded clear to me because the details were
> fresh and clear in my mind then).

It also didn't state *when* the roles would change. The natural point
is after a major release when master and maint have converged
anyway.

>> I was not aware that 'master' and 'next' were rewindable before.
>>
>> Then there was the last message before the implementation that stated:
>>
>>  In a couple of days, I will proceed with this "branch moving":
>>
>> * branch-1.13.2 -> maint
>> * maint -> master
>> * master -> next
>>
>>
>> No other message mentions git branches that I could find, but I might
>> have missed some instance.
>>
>> Now, there are more than one way to "move branches". The most natural
>> is to merge your way forward,
>>
> Not in this case, as 'master' had several commits lacking in 'maint'.

See below.

>> in fact that's the only one that makes
>> sense if the branches are *not* *rewindable*.
>>
>> Thinking about this for a few minutes, I think I would have (with
>> better commit messages):
>>   # create 'next'
>>   $ git branch next master
>>
>>   # update 'master'
>>   $ git branch new-master maint
>>   $ git checkout new-master
>>   $ git merge --strategy=ours master -m "rename maint -> master"
>>
> This would have obtained the wrong effect; what was in master before my
> attempted renaming shouldn't have landed in the new 'master', but only
> in 'next'.  In the new 'master', we only wanted what was in the old 'maint'.

The functional changes would not have appeared on the new branch (due to
the 'ours' strategy), but yes, the commits would appear to be present on
the new branch even if their corresponding changes would not, and I guess
the generated ChangeLog would have been wrong too (listing changes that
simply aren't there).

The nice thing with assigning new roles using merges is that the history
is transparent. It will show exactly what has happened for anyone that can
be bothered to look.

But as I said, I only thought about it for a few minutes...

>>   $ git checkout master
>>   $ git merge new-master # a simple fast-forward
>>   $ git branch -D new-master
>>
>>   # update 'maint'
>>   $ git branch new-maint branch-1.13.2
>>   $ git checkout new-maint
>>   $ git merge --strategy=ours maint -m "rename branch-1.13.2 -> maint"
>>   $ git checkout maint
>>   $ git merge new-maint # a simple fast-forward
>>   $ git branch -D new-maint
>>
> Same issue as above.

Add these "abnormal" merges, and I think all would have been fine
(apart from the generated ChangeLog mentioned above):

$ git checkout master
$ git merge --strategy=ours maint
$ git checkout next
$ git merge --strategy=ours master

But, I don't actually think this branch restructuring was a good
idea at all at this point in time. Branch reorganizations are
better done in conjunction with a new major release.

Re: bug#13578: [IMPORTANT] Savannah issues

2013-02-28 Thread Peter Rosin
On 2013-02-28 00:39, Stefano Lattarini wrote:
> On 02/28/2013 12:00 AM, Peter Rosin wrote:
>>
>> [SNIP]
>>
>> What I meant was that you can use (some of) my above proposed merges
>> to go forward with the new role for master instead of requiring help
>> from Savannah to allow rewriting master.
>>
> So... now are you ok with *completing* my branch renaming instead
> of reverting the part of it that has already been done?  Puzzled...

Personally, I will adapt to whatever you do. My objection was never
about me personally, since I was aware of what took place and got
clued in by your message about the troubles rewriting master. I am
mostly just baffled that you even consider branch rewriting to be
an option at all. If it weren't for the Savannah issues, I would
probably have missed it, because I don't read automake-commit very
carefully. Barring that "Savannah issues" mail, I would probably have
first noticed the change when I pulled the next time (who knows when
that would have happened, I don't pull the automake repo just for
thrills). And I would have been extremely surprised by the failed
merge during that pull.

Who knows how many there are out there with a clone of the repo, but
not following the mailing lists very carefully? Those are the ones
I'm thinking about, and I think you should too. But since I'm not the
maintainer, I will not have to face them when they have wasted time
trying to figure out what has happened when their next pull fails.
In other words, what to do next is your call.

>>> As I said, if you reach a consensus on that (and I guess you will),
>>> feel free to go ahead with that.  No objection from me.
>>
>> You are the maintainer, I'm just stating my opinion. I honestly don't
>> know what I think is best to do now, when the rewriting has already
>> started but not yet completed. I guess it's your mess, and I don't
>> really want to take responsibility for it by stepping in and trying
>> to clear it up. I.e., I will only offer my opinion at this point.
>>
> Fine, I'll revert the partial branch renaming when I have time to do
> that with enough care and attention to avoid another half-done botch-up
> (might be few days or a week or more; please don't push to the repo in
> the meantime).

A second rewrite "undoing" the first is probably the lesser evil, even
if it is another branch rewrite. (Quotes around "undoing" since the
rewrite can't actually be undone, and I, and probably others as well,
will have to adjust our local repos a second time.)

Cheers,
Peter




Re: C++ and .cp extension

2013-05-19 Thread Peter Rosin
On 2013-05-19 18:57, John Andreasson wrote:
> Hi.
> 
> I have an old C++ project that I'm modernizing, and part of the process
> involves migrating to Autotools.
> 
> All source files use the .cp extension. I know it's not common, but many
> compilers recognize it as C++. Automake, however, doesn't.
> 
> I guess I could rename all of them to end in something that Automake
> recognizes, like .cc. But I would like to ask if there's a way to make
> Automake accept .cp first?
> 
> I have searched the documentation, but all I can find is the list of
> supported file extensions. I think there must be a way to explicitly tell
> it to use other extensions as well.

There are compilers that do not like 'strange' extensions, so to be
maximally portable you should stick to something standard like .cpp
or .cc; that is just the flip side of the statement in your second
paragraph, I guess (many != all).

I don't have a definitive answer to the Automake-and-.cp question,
sorry, but see the sketch below.
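
The Automake manual describes handling new file extensions via suffix
rules, so something like this in Makefile.am might work (untested
sketch; it assumes AC_PROG_CXX is in configure.ac so that Automake
defines $(CXXCOMPILE)):

SUFFIXES = .cp
.cp.o:
	$(CXXCOMPILE) -c -o $@ $<

A parallel .cp.obj rule may be needed where the object suffix is .obj.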

Cheers,
Peter




Re: [FYI] {micro} tests: remove some code duplication

2013-05-22 Thread Peter Rosin
On 2013-05-22 15:57, Stefano Lattarini wrote:
> * t/ax/am-test-lib (null_install): New function.
> * t/instdir-java.sh: Use it instead of copied & pasted code.
> * t/instdir-lisp.sh: Likewise.
> * t/instdir-ltlib.sh: Likewise.
> * t/instdir-prog.sh: Likewise.
> * t/instdir-python.sh: Likewise.
> * t/instdir-texi.sh: Likewise.
> * t/instdir.sh: Likewise.
> * t/instdir2.sh: Likewise.

Hi!

Reading about micro releases in HACKING:

* Micro releases should be just bug-fixing releases; no new features
  should be added, and ideally, only trivial bugs, recent regressions,
  or documentation issues should be addressed by them.

Looking at this change (and possibly others, I haven't checked
closely) I don't see the fit.

Now, I'm not complaining, but the above wording gave me the impression
that micro releases would contain very few commits, but it seems
that commits to micro are a lot more frequent than I anticipated.

The disconnect is perhaps that changes to tests are not explicitly
mentioned? And since "non-trivial but mostly safe" cleanups are
allowed in minor releases, I would assume that trivial cleanups
fit in micro as well? Maybe that should be mentioned explicitly?

Anyway, I don't really care, I'm just a bit surprised...

Cheers,
Peter




Re: Micro releases and testsuite work

2013-05-22 Thread Peter Rosin
On 2013-05-22 20:14, Stefano Lattarini wrote:
> Hi Peter.
> 
> On 05/22/2013 06:35 PM, Peter Rosin wrote:
>> On 2013-05-22 15:57, Stefano Lattarini wrote:
>>> * t/ax/am-test-lib (null_install): New function.
>>> * t/instdir-java.sh: Use it instead of copied & pasted code.
>>> * t/instdir-lisp.sh: Likewise.
>>> * t/instdir-ltlib.sh: Likewise.
>>> * t/instdir-prog.sh: Likewise.
>>> * t/instdir-python.sh: Likewise.
>>> * t/instdir-texi.sh: Likewise.
>>> * t/instdir.sh: Likewise.
>>> * t/instdir2.sh: Likewise.
>>
>> Hi!
>>
>> Reading about micro releases in HACKING:
>>
>> * Micro releases should be just bug-fixing releases; no new features
>>   should be added, and ideally, only trivial bugs, recent regressions,
>>   or documentation issues should be addressed by them.
>>
>> Looking at this change (and possible others, I haven't checked
>> closely) I don't see the fit.
>>
> Indeed, I think testsuite refactorings should be added to the list
> above (see patch below). After all, causing possible disruption or
> spurious failures in the testsuite is not nearly as serious as the
> introduction of a regression or a backward-incompatibility.  In fact,
> I'd even rather see such a disruption exposed early, in a micro release
> (where we can be much more confident about the overall correctness and
> health of the code base) than in a minor or major release, where it might
> be more difficult to discriminate between issues caused by work on
> the code and issues caused by work on the testsuite.
> 
>> Now, I'm not complaining, but the above wording gave me the impression
>> that micro releases would contain very few commits, but it seems
>> that commits to micro are a lot more frequent than I anticipated.
>>
> So far, apart from a minor bug fix, all the commits on 'micro' have
> been testsuite-related.  That is how it should be IMHO.  But you are
> right that this wasn't made clear at all in HACKING.
> 
>> The disconnect is perhaps that changes to tests are not explicitly
>> mentioned?
>>
> And that they do not influence how Automake will work on real-world
> packages.  That is, a testsuite regression is not going to annoy or
> inconvenience Automake users, but only the Automake developers.

Is that really true? If some silly mistake in a cleanup/refactoring
of the testsuite causes forty-eleven-something failures on a platform
no one has bothered to check, a user on that platform is going to be
less than impressed, would probably not trust the new micro version,
and would thus be inconvenienced indeed. My opinion is that it is
best to make as few changes as possible for a micro release, while
still fixing the things that need to be fixed, of course. But hey,
your call obviously...

>> And since "non-trivial but mostly safe" cleanups are
>> allowed in minor releases, I would assume that trivial cleanups
>> fit in micro as well?
>>
> Not really.  And in this case, the testsuite refactorings done in
> 'micro' so far are actually not very safe either (it took me a
> while to write them in a way that left the testsuite passing on
> FreeBSD, NetBSD and Solaris 10).  What makes them acceptable is
> that they touch only the testsuite.
> 
>> Maybe that should be mentioned explicitly?
>>
> The patch below should make things clearer.  WDYT?

Yes, now the text in HACKING matches the commits on 'micro'.

>> Anyway, I don't really care, I'm just a bit surprised...
>>
> I hope this reply of mine clarifies things.

Indeed, thanks!

Cheers,
Peter




Re: More control over 'make dist'

2016-09-14 Thread Peter Rosin
On 2016-09-14 11:33, Michal Privoznik wrote:
> Dear list,
> 
> I'm a libvirt devel and I've run into an interesting problem. I'd like to
> hear your opinions on it.
> 
> Libvirt is a virtualization library that uses XML to store a virtual
> machine config. We also have a couple of tests in our repository that check
> whether XML configs are valid, and whether some operations change it in
> expected way. Some operations don't change the XML at all, in which case
> we just symlink the output file to point to the input file:
> 
> ln -s $xml.in $xml.out
> 
> However, I was looking into the archive produced by 'make dist' the other
> day and found out that the symlinks are not preserved. I've traced down
> the problem and found that autoconf is just hardcoding some of tar's
> options. Namely -chf. Yes, it is -h that causes a symlink to be
> dereferenced.
> 
> So my question is, what do you think of making -h configurable? We could
> add a new tar-* option to AM_INIT_AUTOMAKE, say tar-symlinks, which would
> suppress -h on tar's command line.
> 
> What are your thoughts?

I believe that is done so that the release tarball can be unpacked on
systems that do not support symlinks, or where symlinks are not as
flexible as one might wish.
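
The dereferencing is easy to see in isolation (a quick untested
illustration; file names made up):

$ echo data > file.in
$ ln -s file.in file.out
$ tar -chf archive.tar file.out
$ tar -tvf archive.tar    # file.out shows up as a regular file

Drop the -h and the same listing shows file.out as a symlink.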

Cheers,
Peter



cl -c -o trouble in libtool am-subdir.at

2009-01-21 Thread Peter Rosin

Hi!

For some years now, I've been working on and off on adding MSVC
support w/o wrapper scripts to libtool (see the pr-msvc-support
branch in libtool git) and have run into an issue that has been
brought up here before.

http://lists.gnu.org/archive/html/automake/2007-06/msg00083.html

I was trying to fix the am-subdir.at test in the libtool testsuite.
That test uses C++ and compiles in subdirs, so it's really no big
surprise that MSVC is in trouble there.

In short, MSVC w/o wrapper needs AM_PROG_CXX_C_O for that test to work
(a few other minor tweaks might also be needed, methinks...).

Can such a macro be added to automake, please?
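
For reference, the C side already has the corresponding macro,
AM_PROG_CC_C_O; a configure.ac sketch of where the requested C++
analogue would slot in:

AC_PROG_CC
AM_PROG_CC_C_O
AC_PROG_CXX
dnl AM_PROG_CXX_C_O would go here, once it exists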

Cheers,
Peter (not subscribed, please CC)





make distcheck is not using the specified compiler

2010-06-19 Thread Peter Rosin

Hi!

I'm trying to get in position for running the testsuite on MSYS using
the Microsoft C/C++ Compiler (and don't really know what to expect).

As a first step I tried this on a fresh checkout:

./bootstrap
./configure CC=cl CFLAGS=-MD CXX=cl CXXFLAGS=-MD
make

and that triggered a distcheck of amhello in the doc subdir. What
surprised me was that even though I specified a particular compiler,
that selection didn't find its way to the distcheck. The distcheck
has this in the output of its configure run:

checking for a BSD-compatible install... /bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for gcc... gcc
checking for C compiler default output file name... a.exe
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables... .exe
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating config.h
config.status: executing depfiles commands

Notice the use of gcc.

The configure script has the following (admittedly boilerplate)
text in its --help output:

Usage: ./configure [OPTION]... [VAR=VALUE]...

To assign environment variables (e.g., CC, CFLAGS...), specify them as
VAR=VALUE.  See below for descriptions of some of the useful variables.

So, I think CC=cl should be propagated...
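
(If all that is needed is to pass the choice down to a project's own
distcheck, the documented DISTCHECK_CONFIGURE_FLAGS hook should do it;
an untested sketch:

make distcheck DISTCHECK_CONFIGURE_FLAGS='CC=cl CFLAGS=-MD CXX=cl CXXFLAGS=-MD'

but plain VAR=VALUE settings given to configure really should be
remembered without that.)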



Further, if I work around the above by exporting CC and CFLAGS etc
instead, the make distcheck fails (or maybe some other dist-related
make target) since MSVC creates a hello.exe.manifest file that isn't
cleaned up. These are the last lines of output:

make[4]: Entering directory 
`/c/cygwin/home/peda/automake/git/testing/doc/amhello/amhello-1.0/_build'
test -z "" || rm -f
test . = ".." || test -z "" || rm -f
rm -f config.h stamp-h1
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
rm -f cscope.out cscope.in.out cscope.po.out cscope.files
make[4]: Leaving directory 
`/c/cygwin/home/peda/automake/git/testing/doc/amhello/amhello-1.0/_build'
rm -f config.status config.cache config.log configure.lineno 
config.status.lineno
rm -f Makefile
ERROR: files left in build directory after distclean:
./src/hello.exe.manifest
make[3]: *** [distcleancheck] Error 1
make[3]: Leaving directory 
`/c/cygwin/home/peda/automake/git/testing/doc/amhello/amhello-1.0/_build'
make[2]: *** [distcheck] Error 1
make[2]: Leaving directory 
`/c/cygwin/home/peda/automake/git/testing/doc/amhello'
make[1]: *** [amhello-1.0.tar.gz] Error 2
make[1]: Leaving directory `/c/cygwin/home/peda/automake/git/testing/doc'
make: *** [all-recursive] Error 1

Now, I can get past this if I rm the offending file with the correct
timing, but I fear that I will have the same trouble with "real" projects.
It would be nice to be able to distcheck projects with MSVC so I'm
asking if it would be possible to clean up that file. I'm not that
familiar with either perl or the codebase, otherwise I would have
proposed a patch, but I'm sure someone can come up with the (expected)
one-liner pretty quickly if it's deemed OK to zap
%PROGRAM%%EXEEXT%.manifest (or however it's spelled) during make clean.

But perhaps there are projects that provide a manually written
manifest file? In that case it would be pretty evil to zap the file.
Would it be possible to detect the difference between a manually
written manifest file and an auto-generated one automatically?
Maybe not zap it if it's older than the executable?
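
In the meantime, a per-project workaround should be possible with
the stock CLEANFILES support; e.g. in src/Makefile.am:

CLEANFILES = hello$(EXEEXT).manifest

(assuming the manifest is auto-generated and safe to remove, which
is exactly the open question above).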

Cheers,
Peter

(1) http://lists.gnu.org/archive/html/automake-patches/2010-06/msg00082.html



Re: RE: call for help/crazy idea: nmake support

2010-08-17 Thread Peter Rosin
On 2010-08-13 19:18, Ralf Wildenhues wrote:
> I would like to thank everyone who provided input on this topic.
> It certainly helps when considering where to go.  One conclusion
> from this is that we should get Peter's MSVC support finished
> and completed for Automake 1.12 and the next Libtool release.
> 
> I wasn't aware that there are more MSVC-related build system tools
> which one could target.  I'm still not sure whether the idea to
> produce support for some of them should be buried completely, but
> I for one won't be pursuing it in the nearer future.  If somebody
> else feels scratching that it however ...
> 
> And of course I would be delighted if some of you provided fixes
> for pkg-config and whetever else is needed to make building for
> this setup work better.

Hi!

Sorry for the late reply.

A couple of things were never mentioned in this thread...

First I want to clarify that nmake support is not the same as
Visual Studio project file support. The project file can be used
to generate an nmake Makefile, but that is not "normal usage".
Normally you just load the project file into the GUI and go.
I don't know if anybody actually had this wrong, I'm just spelling
it out, just in case...

And second, in my opinion it is more important to not require
GNU binutils when using the MS toolchain, than to not require
a sensible make. We are almost there, but not completely. The
reasoning is that the make implementation should not affect the
end result, but mixing in tools from GNU binutils might. Most
of the remaining required uses of GNU binutils are not actually
doing anything to the binaries, they simply inspect them, but
one exception is the resource compiler.

This is an issue where automake could possibly help, since
Microsoft rc isn't really interface-compatible with windres.
Microsoft rc produces a .res file, while GNU windres produces
a .obj (or .o) file. There's also the inevitable shuffle of
options... In a project of mine I have this in my
configure.ac:

AC_CHECK_TOOL(RC, windres,)
AC_ARG_VAR(RC, [Resource Compiler for Windows resources])
RC_VERSION

# Is $RC Microsoft rc (matched case-insensitively on the program
# name) rather than GNU windres?
case $RC in
[[rR][cC]]*) msrc=yes ;;
*) msrc=no ;;
esac
AM_CONDITIONAL(HAVE_MSRC, test $msrc = yes)


And this in my Makefile.am:

if HAVE_WIN32RES
if HAVE_MSRC
# Microsoft rc emits a .res file, which is linked in directly.
foo_LDADD += foo.res
else
# GNU windres emits an ordinary object file.
foo_LDADD += foo.$(OBJEXT)
endif
endif

if HAVE_MSRC
foo.res: foo.rc foo.ico src/resource.h
$(RC) -i $(top_srcdir)/src foo.rc
else
foo.o: foo.rc foo.ico src/resource.h
$(RC) --include-dir $(top_srcdir)/src foo.rc $@

foo.obj: foo.rc foo.ico src/resource.h
$(RC) --include-dir $(top_srcdir)/src foo.rc $@
endif


I would love to simplify that. I expect similar issues with
the message compiler (Microsoft mc vs. GNU windmc) but I haven't
needed a message compiler so I can't say...
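
To be concrete about what "simplify" could mean, here is some purely
hypothetical Automake syntax (nothing like this exists today):

foo_SOURCES = main.c
foo_RESOURCES = foo.rc

with Automake hiding the rc/windres differences behind the scenes.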

Cheers,
Peter



Static libraries not following the libfoo.a naming convention

2010-09-23 Thread Peter Rosin
Hi!

I have been wondering when I was going to run into this problem, and
now it has happened (in the Libtool testsuite, tests/demo-deplibs.test).

Automake has rules for creating static libraries like so (taken from
that test case):

EXTRA_LIBRARIES = libhell0.a
libhell0_a_SOURCES =
libhell0_a_LIBADD = hello.$(OBJEXT) foo.$(OBJEXT)

That will create rules to create libhell0.a. The test case then has
rules for consuming that library, like so:

EXTRA_LTLIBRARIES = libhell1.la libhell2.la
libhell1_la_SOURCES = hell1.c
libhell1_la_LIBADD = -L. -lhell0
libhell1_la_LDFLAGS = -no-undefined -rpath $(libdir)
libhell1_la_DEPENDENCIES = libhell0.a
libhell2_la_SOURCES = hell2.c
libhell2_la_LIBADD = -L. -lhell0
libhell2_la_LDFLAGS = -no-undefined -rpath $(libdir)
libhell2_la_DEPENDENCIES = libhell0.a

"-L. -lhell0" is the key here. That is expected to pick up libhell0.a,
also notice the *_DEPENDENCIES, which lists the expected library file
name explicitly.

Now, that was all familiar territory, enter MS tools...

MS tools do not expect libraries to end with .a, and the convention
is to not prefix the library name with lib, which is why the compile
script, when used as an MSVC wrapper, translates -lhell0 into hell0.lib
(and not libhell0.a).  Side note: Libtool creates static archives in
the hell0.lib form for MSVC.

compile *could* be extended to also look for the libhell0.a form, but
that is not really pretty; the compiler driver will say stuff like:

$ cl -nologo hell.obj libhell0.a
cl : Command line warning D9024 : unrecognized source file type 'libhell0.a', 
object file assumed

It works though, so that's the easy out I suppose. But me not like.
Libraries should simply not be named libhell0.a on this platform.

Would it be possible for Automake to create static libraries in the
hell0.lib form when it sees "*_LIBRARIES = libhell0.a"? Since there
might be 3rd party dependencies assuming the libhell0.a form, it
might be good if Automake also copied the hell0.lib file to
libhell0.a afterwards, to satisfy those dependencies. (Or a symlink
might suffice? See the sketch below.)
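
As a manual stop-gap in the meantime, I suppose a project could do
the copy itself; an untested Makefile.am sketch (going the other way,
from the libhell0.a that Automake builds today to the name MS tools
expect):

hell0.lib: libhell0.a
	cp libhell0.a hell0.lib

Having Automake pick the right name in the first place would of
course be much nicer.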

Cheers,
Peter



Re: Static libraries not following the libfoo.a naming convention

2010-09-23 Thread Peter Rosin
On 2010-09-23 20:59, Ralf Wildenhues wrote:
> Hi Peter,
> 
> * Peter Rosin wrote on Thu, Sep 23, 2010 at 10:01:16AM CEST:
>> I have been wondering when I was going to run into this problem,
> 
> Me too.

:-)

>> and
>> now it has happened (in the Libtool testsuite, tests/demo-deplibs.test).
> 
>> Automake has rules for creating static libraries like so (taken from
>> that test case):
>>
>> EXTRA_LIBRARIES = libhell0.a
>> libhell0_a_SOURCES =
>> libhell0_a_LIBADD = hello.$(OBJEXT) foo.$(OBJEXT)
> [...]
> 
>> Would it be possible for Automake to create static libraries of the
>> hell0.lib form, when it sees "*_LIBRARIES = libhell0.a"?
> 
> Yes, I think that should be possible in principle.  At least as long as
> all possible libraries to be built are known at automake run time.
> (This means, that if there are any @substitutions@ in *_LIBRARIES
> variables, as for example in tests/condlib.test tests/subst3.test
> tests/substtarg.test, then all possible values must either be listed
> in some EXTRA_*LIBRARIES variable, or the user must specify a rule to
> update the library.)
> 
>> Since there
>> might be 3rd party dependencies assuming the libhell0.a form, it
>> might be good if Automake also copied the hell0.lib file to
>> libhell0.a after creating libraries of the hell0.lib form to satisfy
>> those dependencies. (Or a symlink might suffice?)
> 
> I haven't made up my mind yet on the exact semantics.  There are a few
> possibilities, but portability and practicality will probably rule some
> of these out.
> 
> ATM I'm wondering whether completing AM_PROG_AR first or looking into
> this would be better.  AM_PROG_AR creates lots of fallout in Libtool
> beside the remaining work in Automake, and fixing Libtool so that it
> works well with older and newer Automake isn't exactly trivial.

Hi Ralf,

Yes, let's put this at the back of the queue.  I just figured I'd bring
it up, and it was nice to hear that it isn't completely impossible.

Cheers,
Peter



Re: Automake and AR

2011-01-04 Thread Peter Rosin
On 2011-01-04 16:23, NightStrike wrote:
> On Thu, Dec 9, 2010 at 10:11 AM, NightStrike  wrote:
>> On Sat, Nov 27, 2010 at 10:25 AM, NightStrike  wrote:
>>> On Sun, Oct 31, 2010 at 9:37 AM, NightStrike  wrote:
 On Fri, Oct 22, 2010 at 1:07 PM, NightStrike  wrote:
> On Wed, Mar 3, 2010 at 3:51 PM, Ralf Wildenhues  
> wrote:
>> * NightStrike wrote on Wed, Mar 03, 2010 at 06:59:53PM CET:
>>> Automake somehow defines AR to 'ar'.  I'm not sure where this comes
>>> from, but I do know that it's definitely not $host-ar, as I would
>>> expect.
>>>
>>> Is this an automake bug, or user error?
>>
>> Looks like an automake bug to me.  Just putting
>>  AC_CHECK_TOOL([AR], [ar], [false])
>>
>> somewhere in configure.ac should serve as a workaround though.
>>
>> Thanks for the report, will fix,
>> Ralf
>>
>
> Was this ever fixed?  What version of automake will it be in?
>

 Ping

>>>
>>> Ping
>>>
>>
>> Ping x3 :)
>>
> 
> Ping x4

Just a silly question, since nothing else is happening: do you even
have $host-ar anywhere on your PATH?
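
A quick way to check from your build environment (triplet made up):

$ command -v x86_64-w64-mingw32-ar || echo "no \$host-ar on PATH"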

Cheers,
Peter