On 17/05/2017 15:13, Chris Angelico wrote:
On Wed, May 17, 2017 at 11:53 PM, bartc <b...@freeuk.com> wrote:
That's all true. But the answer is not to make it a nightmare for everyone
else as well as yourself. If the requirement is to get other people to build
your product from source for the purpose of using it or testing it (and for
various reasons using prebuilt binaries is not an option), then the process
ought to be as painless as possible.
What, you mean like this?
./configure
make
sudo make install
No, not like that. I mean genuinely simple. Your example:
(1) Doesn't work on Windows
(2) Usually seems to involve executing 20,000 to 30,000 lines of
complete gobbledygook in that configure script.
That can't possibly be justified.
One project I tried to compile comprised only a dozen modules, but still
required nearly 1000 lines of incantations in the combined configure and
makefile scripts to build it (which didn't work, because it was Windows).
I think the bulk of open source software can be built using those steps.
Maybe, if within Unix or Linux.
What would happen if you were presented with a machine, of some unknown
OS except that it's not Linux, and only had an editor, a bare compiler
and linker to work with? Would you be completely helpless?
Suppose then the task was to run some Python, but you only had the bare
sources: .c and .h files, and whatever .py files come as standard; where
would you start? Would that 18,000-line configure script come in handy,
or would it be no use at all without your usual support systems?
And you're accusing /me/ of being in a bubble!
Compare that ease of compiling with what it takes to build other C
compilers from source. (And actually, this one can compile itself, some
17Kloc, in a few tens of milliseconds.)
In other words, it's only targeting one single CPU architecture and OS
API.
And? My gcc installation only generates code for x86, x64. So does my
Pelles C**. So does my lccwin**. So does my DMC (only x86). So does my
Tiny C**. So does my MSVC2008 (x86 only). All also only generate code
for Windows.
(** Those use separate installations to achieve those two targets.
lccwin is one installation but uses different sets of files.)
If I wanted to build gcc for example from sources, then I need to
download and grapple with a package containing 100,000 files, including
tens of thousands of source files, even if I'm only interested in one
target. /That/ is supposed to be better?
Anyway, adding another target is no big deal. (I've targeted pdp10, z80,
8086, 80386[x86] on previous compilers, sometimes as asm, sometimes as
binary, and also as C source code. I've written assemblers for z80, 8051,
8086, 80186, 80386[partial], all generating binary.)
The Win64 ABI thing is a detail. Win64 passes the first 4 parameters in
registers and requires the caller to reserve 32 bytes of shadow space on
the stack; Linux (the System V ABI) uses 6 registers, no shadow space,
and handles XMM a bit differently. I think that's it...
Also, it emits nasm code, so if you actually want a binary, you
can't consider this complete; the size of nasm + linker needs to be
included.
If I download Clang for Windows, then it doesn't emit anything at all!
In fact it can't compile anything. Because it relies on other compilers
(gcc or MSVC, but it has to be the right one) to supply header files,
linkers, everything it needs.
Clang is a 64MB executable.
But yes, Nasm is a poor match for my compiler, and unsatisfactory. The
next step is to eliminate that and the need for a linker. Then this
'mcc' project can run applications from source just like Python.
It's funny though that when /I/ stipulate third party dependencies
[small, self-contained ones like nasm or golink], then it's bad; when
other people do it [massive great ones like vs2015 or git] then that's good!
And I can only conclude from your comments, that CPython is also
incomplete because it won't work without a bunch of other tools, all
chosen to be the biggest and most elaborate there are.
Yeah, it's not hard for a small program to produce valid
code for one target system. How good is the resultant code? Can you
outperform other popular compilers?
For running my language projects, it might be 30% slower than gcc-O3,
about on a par with compilers such as DMC, lccwin, and PellesC, and
generally faster than Tiny C. [Here I'm mixing up results from my C
compiler and my non-C compiler; I can't be bothered to do any
exhaustive testing because you'll simply find another way of belittling
my efforts.]
Look at the link for sqlite amalgamation I posted earlier today. On that
site, it says they extensively use TCL to generate some of the sources. To
build sqlite conventionally, would require someone to install a TCL system.
Their amalgamated version thankfully doesn't require that.
Oh, you mean like how tkinter embeds tcl? You can't possibly do that
in a production language, can you.
No, I mean having to install some language system that is of no
relevance whatsoever to a developer, other than being a stipulation of
some third party add-on.
If you were paying someone by the hour to add sqlite to a project, would
you rather they just used the amalgamated file, or would you prefer they
spent all afternoon ******* about with TCL trying to get it to generate
those sources?
--
bartc
--
https://mail.python.org/mailman/listinfo/python-list