Hi,

My _OPINION_ on those tradeoffs (compilation speed / optimization / speed of
execution / execution context), and where I "usually" draw my red lines:

Use of makefiles: the main rationale of makefiles is to re-compile/re-link only
what is needed to generate the final products, in order to generate those very
final products faster than if everything was done in a dumb sequence.

If the final products need some optimization due to their presumed execution
contexts, and the optimizing toolchain is too slow to re-generate them, use of
makefiles is appropriate; but if it is fast enough (true on nearly all small
projects), use of makefiles is overkill. Namely, an idiotic sh script which
compiles/links everything in sequence is enough. There is a middle ground where
a shell with job control does some dumb parallel compilation/linking, which is
sensible nowadays with our multi-core cpus. Moreover, in the case of a dumb sh
build script, with a programmer's editor you can easily comment blocks in/out
as needed, even on a fairly big project, in order to stay in the "fast enough"
use case.
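
For illustration, a minimal sketch of such a dumb sh build script (call it
build.sh; the file names and the parallelism are made up), using job control
(& / wait) to get dumb parallel compilation on a multi-core cpu:

    #!/bin/sh
    # dumb build script: compile each unit in the background, then link
    CC=${CC:-cc}

    $CC -c main.c  -o main.o  &
    $CC -c util.c  -o util.o  &
    $CC -c input.c -o input.o &
    wait        # wait for all the background compiles to finish

    $CC main.o util.o input.o -o myprog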

If the final products will run fast enough in the presumed execution contexts,
a non-optimizing toolchain may be enough. If so, re-generating the final
products with a dumb script may be enough even on fairly bigger projects. We
are talking orders of magnitude bigger projects, since the compilation/linking
speed of different toolchains differs by "orders of magnitude".
The real problem here is that "optimizing toolchains", even with optimization
turned off, are still "orders of magnitude" slower than dumb toolchains (maybe
it got better with the latest gcc).
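
To illustrate that tradeoff with the build.sh sketch above (tcc stands here
for a dumb/fast toolchain, gcc for an optimizing one; pick your own):

    # dev build: dumb, non optimizing, very fast toolchain
    CC=tcc ./build.sh

    # release build: optimizing toolchain, much slower to run
    CC="gcc -O2" ./build.sh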

Nowadays, high level script engines are often "fast enough" in all presumed
execution contexts, BUT there are tons of them, all fighting to become the only
one. This is a cancer in the open source world, as many critical runtime/sdk
components now require _SEVERAL_, very expensive, high level script engines. If
I were microsoft/apple/oracle/_insert your fav silicon valley gangsta here_, I
would sabotage the open source world with c++ and similar, and tons of high
level script engines (mmmmmh....).

Now, GNU makefiles have features which help with "project configuration". That
was very true at the time when there were TONS OF F*CK*NG CLOSED SOURCE UNIXES
(God bless gnu/linux). Now, a directory tree of idiotic and simple scripts with
some options should be enough... if there are too many options, it usually
means the project should be broken down (divide and conquer), and certainly not
handed to one of the massive SDK systems out there. Unfortunately, the linux
build system (kconfig and kbuild) is a heavy user of GNU makefile extensions,
due to its massive configuration space and its size. I would not mind it being
bootstrapped with a simple and idiotic build toolchain, as it doesn't actually
need the heavyweight GNU make.
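
A sketch of what I mean by a directory tree of idiotic scripts with a few
options (every name here is made up): the top level script reads a couple of
environment variables and calls the per-directory scripts, nothing more:

    #!/bin/sh
    # top level ./build.sh: a few options as environment variables
    : "${PREFIX:=/usr/local}"
    : "${WITH_GUI:=no}"

    (cd core && ./build.sh)                # always built
    if [ "$WITH_GUI" = yes ]; then
        (cd gui && ./build.sh)             # optional component
    fi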

"SDK portability", nobody should dev on a closed source OS. Cross-compilation
is meant for that. So all the ruby/python/perl/js/etc SDKs can go to hell since
it makes more pertinent to dev on some closed source OSes, which is a
definitive no-no for open source OS supporters. The goal being going from
closed source OSes torward open source OSes, and certainly not to help, with an
additional technical burden, the other way around.
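
Cross-compilation with such a dumb script stays trivial: the cross toolchain
is just another value of the CC variable (the aarch64 triplet below is only an
example):

    # native build, on an open source OS
    ./build.sh

    # cross build for an aarch64 target, same dumb script
    CC=aarch64-linux-gnu-gcc ./build.sh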

Ok, it is far from a perfect explanation, but it gives some clues about my way
of seeing things. The main guidelines are, roughly speaking, "the less the
better", "divide and conquer", "small is beautiful", and "NO, the
comfort/portability brought by high level and very expensive script engines
does not justify anything".

-- 
Sylvain
