On 03/02/2015 05:33 PM, Tor Myklebust wrote:
> This is indeed true, but it seems like a social problem rather than a
> technical problem. People can, and will, write garbage software no
> matter what tools they have. It might pay to let them do this with as
> little pain as possible so they can go back to working on the thing
> they were actually interested in doing.
> It sounds here like you don't like that your X application is running
> scripts, probably incorrectly, to do basic stuff. And how running a
> script can fail, with error codes like "too many open files" or "too
> many processes" or "not enough memory" when the task in question
> doesn't obviously involve allocating memory or spinning up a process.
> If you're in the mood to lay blame in a situation like this, I suggest
> directing it at the X application's author rather than the author of
> the script the application calls.
I'm not pointing fingers at any particular author, just the prevailing
wisdom of the day.
> I think the answer to this question is more complicated than can be
> described by a "tipping point" in a cost-benefit analysis. I think
> it's context-dependent and I think it can change over the lifetime of
> the software.
> Performance concerns, for instance, often crop up later in the
> lifetime of a project once it has a sizable userbase (or once somebody
> asks for the resource-intensive feature enough times). Should we code
> everything in C from the start just so we don't have to handle the
> performance problems in the unlikely event that the project succeeds?
> Maybe, but what if that constraint makes the project take quite a bit
> longer? And what if that reduces its chances of success?
If programming were that "black and white" I would agree with you.
However, C in general is the language in which other languages are
written. The ease with which you can link C object code to whatever
language you happen to be using pretty much renders that concern moot.
> Performance concerns can cut the other way, too, under the name
> "scalability." Because it's easier to write really fast C code, you
> can get a lot farther with your really fast C code before hitting a
> performance wall. That sounds good, but it means your code can evolve
> into a bigger mess by the time you have to address your choice of data
> structures, or parallelisation, or distribution.
I've seen plenty of scaling arguments, and some of them are perfectly
valid arguments, but many of them are also just suppositions based on
the idea that C can't OOP.
> It's interesting that you'd mention Java here. I don't much like the
> Java language or the Java programming culture, but Java bytecode has
> the interesting property that, with a little plumbing, one can send
> executable code over the network and have it run on a remote machine.
> This actually winds up being useful for large-scale data crunching,
> where you want to move the code to the data rather than the data to
> the code wherever possible. I wouldn't know how to build a system
> that does this in C (for instance) that isn't brittle.
There is no magic to it. Java's core is usually written in C after all.
Realistically, the reason Java can do that is that Java bytecode is
processor-generic. You could theoretically do that with C as long as
the processors are the same.
> It depends on what the utility is. C does not support certain useful
> forms of abstraction available in other languages. (I'm not talking
> about inheritance here. Generics and mixins are to my knowledge both
> impossible to do in a performant and syntactically transparent way in
> C. Ditto for anonymous functions. The way you emulate tagged unions
> in C---a struct with a tag field and a union member---is a little
> scary in any large code base because incomplete 'switch' statements
> won't raise compile-time warnings.)
Generics are little more than data in a buffer, IMHO. All you need do
is cast them as you see fit. The fact that other languages offer you
syntactic sugar to do it is fine too. The real point is that the
language you refer to is not more capable: you are running it on the
same hardware. The difference is that the other language just makes
it easier for you in some cases, not necessarily better.
> I think you have aimed your criticism at the wrong target. It is
> annoying that "new" and "user-friendly" have both become synonymous
> with "does not work under heavy load or unusual conditions" in Linux.
> It wasn't always that way. But I would look toward the people
> building brittle stuff instead of the guy who wrote adduser if I
> wanted to diagnose the problem.
I would rather address the disease than the symptom. I'd rather see
the root cause, the cathedral of dependencies, taken down a notch.
Doing that starts at the beginning, IMHO.
_______________________________________________
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng