On 05/30/2018 02:23 PM, Andy Armstrong wrote:
> Hi Eric,
[top-posting is a bit harder to read on a technical list, so I'll stick
with the easier-to-read inline posting]
> That could well be a far better approach. I am not sure how to perform
> cross-platform builds, especially taking into account the interesting
> platform uniqueness such as codepage etc.
I'm not proposing cross-building (which is indeed a possibility, but a
much more technically challenging one to get correct, so it's never my
first suggestion), so much as having two machines:
- development machine: has autotools installed, has nano.git
[https://git.savannah.gnu.org/cgit/nano.git] checked out (or however you
are currently trying to build the latest nano); probably GNU/Linux, and
most likely using ASCII encoding
- build machine: your mainframe with EBCDIC encoding
On the development machine, run all of the prep steps, including any
required 'autoconf' and 'automake' steps, prior to './configure && make
dist', which creates nano-$vers.tar.$suff (where $suff is typically .gz
or .xz, depending on what compression the developers liked).
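Spelled out, and hedged - the bootstrap step varies by package (nano may
ship an ./autogen.sh rather than expecting a bare autoreconf), and the
version number below is only illustrative - the development-machine side
looks roughly like:

```shell
# Sketch of the development-machine prep.  Guarded behind DO_BUILD=yes so
# it reads as a recipe: the clone and bootstrap need network access and a
# full autotools install.
VERSION=2.9.7                          # illustrative release number
TARBALL=nano-$VERSION.tar.xz           # automake's $package-$version naming
if [ "${DO_BUILD:-no}" = yes ]; then
  git clone https://git.savannah.gnu.org/git/nano.git
  cd nano
  autoreconf -fi    # bootstrap step; some packages use ./autogen.sh instead
  ./configure
  make dist         # produces the distribution tarball, e.g. $TARBALL
fi
echo "$TARBALL"
```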
Then copy the nano-*tar* file to the build machine (whether by ftp, by a
shared network drive, or by whatever other means you have at your disposal).
Then on the build machine, unpack the tarball (perhaps involving some dd
invocations to transliterate from ASCII to EBCDIC as needed - but do so
in a manner which preserves timestamps; I honestly don't know what steps
are required to transfer an ASCII-encoded tarball into use on an EBCDIC
machine).
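For what it's worth, here is one plausible shape for that transliteration
step (untested on z/OS!): POSIX dd's conv=ascii and conv=ebcdic tables are
mutual inverses, and 'touch -r' can carry a saved timestamp back onto the
converted file. The demo file below is a stand-in for a real source file
from the tarball:

```shell
# Round-trip demonstration of dd's codeset conversions, plus timestamp
# preservation with 'touch -r'.
printf 'hello\n' > demo.txt
touch -r demo.txt demo.stamp                         # save the original mtime
dd conv=ebcdic < demo.txt    > demo.ebcdic 2>/dev/null  # ASCII -> EBCDIC
dd conv=ascii  < demo.ebcdic > demo.back   2>/dev/null  # EBCDIC -> ASCII
touch -r demo.stamp demo.back                        # restore the saved mtime
cmp -s demo.txt demo.back && echo "round trip preserved the contents"
```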
Now, on the build machine, you only have to type './configure && make';
you don't have to run autoconf or automake (well, as long as you didn't
mess up timestamps such that make thinks configure.ac is newer than
configure, or trip some similar spurious dependency).
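A quick way to sanity-check (or repair) that ordering, assuming GNU
touch's -d option; the two files here are created purely for the
demonstration:

```shell
# Simulate the relationship make cares about: the generated configure must
# be newer than its input configure.ac, or automake's maintainer rules
# will try to rerun the autotools.
touch -d '2018-05-01 00:00' configure.ac    # the input
touch -d '2018-05-02 00:00' configure       # the generated output, newer
if [ configure -nt configure.ac ]; then
  echo "ordering OK: make will not try to rerun the autotools"
else
  # Marking the outputs as freshly touched repairs a mangled transfer:
  echo "repair with: touch aclocal.m4 configure config.h.in Makefile.in"
fi
```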
In fact, the parts about the development machine have often already
taken place without requiring you to do anything:
https://ftp.gnu.org/gnu/nano/ includes nano-2.9.7.tar.xz, which was
built precisely in the same manner that you would build it, by a
developer who had autotools on their machine and ultimately ran 'make dist'.
So, rather than building from nano.git, just build the pre-built tarball.
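For example (the download and build lines are left commented as a sketch,
since they need network access; the signature check assumes you have a
GnuPG keyring containing the GNU signing keys - GNU releases ship a
detached .sig alongside each tarball):

```shell
# Sketch of fetching and building the pre-built release tarball.
URL=https://ftp.gnu.org/gnu/nano/nano-2.9.7.tar.xz
TARBALL=${URL##*/}           # -> nano-2.9.7.tar.xz
SRCDIR=${TARBALL%.tar.xz}    # -> nano-2.9.7
# wget "$URL" "$URL.sig"
# gpg --verify "$TARBALL.sig" "$TARBALL"
# tar -xJf "$TARBALL" && cd "$SRCDIR" && ./configure && make
echo "$SRCDIR"
```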
> I have not used the 'make dist' target before, but wouldn't it require
> that I choose the appropriate configure and make arguments to compile for
> the platform I am targeting? Thoughts?
'make dist' is SUPPOSED to produce an idempotent and complete tarball
(modulo noise like changes in timestamps), independent of what configure
options you used (if it doesn't, that's a bug in the particular package
you are trying to build). 'make distcheck' tries to check that this is
actually the case, by forcefully creating a tarball, faking a PATH with
no autotools, and performing a VPATH build from the tarball, to exercise
what an end user without autotools is likely to encounter (not all
developers use 'make distcheck' the way they should, but it's part of
the GNU Coding Standards, and automake tries to promote it for developers).
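The core of what distcheck exercises is just an out-of-tree (VPATH) build
from a scratch directory, roughly as below ($SRC is a stand-in for the
unpacked source tree, and the real commands are commented since they need
an actual tarball on hand):

```shell
# The shape of the check 'make distcheck' performs internally: unpack its
# own freshly-built tarball, then configure and build in a separate
# directory, so any file the tarball forgot to ship shows up as a failure.
SRC=$PWD/nano-2.9.7
BUILD=$PWD/_distcheck-build
mkdir -p "$BUILD"
# (cd "$BUILD" && "$SRC/configure" && make && make check)
echo "VPATH build dir: $BUILD"
```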
> I am not even sure of the official 'name' of the system I am targeting.
> s390? omvs? To be clear, I am targeting USS on zSystems, not Linux on z.
> Thoughts? Best practices?
Running config.guess (which is included in many different tarballs,
again because automake finds it useful) should give the 'official' name
of your system, at least in terms that the autotools cater to.
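For example, from an unpacked tarball (the search is because the script's
location varies by package; the fallback message and the 'triplet'
variable are just this sketch's own, and I can't vouch for the exact
string USS on zSystems would report without trying it there):

```shell
# Look for config.guess in the current source tree and run it to learn
# the canonical system triplet the autotools will use.
guess=$(find . -name config.guess 2>/dev/null | head -n 1)
if [ -n "$guess" ]; then
  triplet=$(sh "$guess")
  echo "$triplet"
else
  triplet=unknown
  echo "no config.guess here; any recent autotools tarball ships one"
fi
```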
Admittedly, very few people use USS on zSystems while ALSO trying to use
lots of GNU software, so it's highly likely that you have a LOT of
non-portable assumptions to overcome when porting open source software
to your platform (such as the 10-year-old autoconf bug you just
reported, where we regressed from something that worked to something
that tickles a latent bug in m4's eval that has been present even
longer). And actually fixing those bugs may require patching
configure.ac or Makefile.am, at which point the 2-machine approach
means rerunning 'make dist' and copying over a fresh tarball for
each tweak you want to test. But having the separation
between devel and build machine, even if it involves a lot of iterative
copying, may still be easier than trying to have the build machine BE
the development machine, where you have to get an entire ecosystem of
software running before you can directly build from git instead of from
tarballs.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org