On 10/15/2013 2:26 PM, Miles Fidelman wrote:
Jerry Stuckle wrote:
On 10/15/2013 1:21 PM, Miles Fidelman wrote:
Jerry Stuckle wrote:
Programmers nowadays do not have to manage the computer's memory either,
but it seems that when they know how the low level works, they write more
robust programs.
Not necessarily. I've seen great programmers who don't know or worry
about the internals. And I've seen poor programmers who grew up
building their own hardware. There is little relationship between
knowledge of the underlying hardware and ability to program.
Unless you care about things like performance or resiliency. Gaming, big
data analysis, real-time control, anything that does physical I/O, etc.,
etc., etc.
Application programmers do not do real-time control, physical I/O,
etc. Those are system programmers. Gaming is a very specialized
area, which most programmers never get into. Same with big data
analysis - although that is normally done on large supercomputers - or
at least mainframes (some people think 1M rows of data in a database
is "big").
None of which has anything to do with the OSI layers or programming.
They are all sysadmin functions.
Again, unless you need to write network code. Or write a distributed
application.
Again, system programmers. And even with a distributed application
you don't need to know about how the network works.
Ok, we're back into semantics. You're talking about "coders" as opposed
to "software engineers" - a very limited and low-level skill set, one
level above spreadsheet jockeys. Certainly not any kind of engineering
discipline. (Also, as a definitional aside, last time I checked,
"systems programming" referred to writing operating systems and such - a
very focused, albeit complicated, activity.)
What's a "coder"? In over 40 years of programming, I've met many
programmers, but no "coders". Some were better than others - but none
had "limited and low-level skill set". Otherwise they wouldn't have
been employed as programmers.
And "Systems Programming" has never mean someone who writes operating
systems; I've known a lot of systems programmers, who's job was to
ensure the system ran. In some cases it meant compiling the OS with the
require options; other times it meant configuring the OS to meet their
needs. But they didn't write OS's.
But then these were also programmers for large companies running IBM
mainframes with thousands of users.
In any case....
The programmers where I'm currently working - application systems for
buses (vehicle location, engine monitoring and diagnostics, scheduling,
passenger information) -- yeah, they have to worry about things like how
often vehicles send updates over the air, the vagaries of data
transmission over cell networks (what failure modes to account for),
etc., etc., etc.
That's not "real time". "Real time" is when you get an input and you
have to make an immediate decision and output it. Motor controllers are
one example: constantly adjusting motor speed to keep a conveyor belt
running at optimum capacity as the load changes. Another is steering a
radiotelescope to aim at a specific point in the sky and keep it there.
Oh, and once the radiotelescope is properly aimed, process the
information coming from it and 30-odd others spaced over a couple of
hundred square miles, accounting for the propagation delay from each
one, and combining the outputs into one digital signal which can be
further processed or stored.
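To make the distinction concrete, here is a minimal sketch of the kind of
fixed-period control loop a motor controller runs. The sensor and actuator
functions, the gain, and the period are hypothetical placeholders, not any
particular controller's interface; the point is only that input, decision,
and output all have to fit inside each cycle.

```python
import time

# Hypothetical stand-ins for the hardware interface; a real controller
# would read an encoder and write a drive level through device drivers.
def read_belt_speed():
    return 0.95          # current speed as a fraction of the target

def set_motor_output(level):
    pass                 # write the new drive level to the motor

TARGET = 1.0             # desired speed (normalized)
GAIN = 0.5               # proportional gain (made-up value)
PERIOD = 0.010           # 10 ms control period -- the "real time" constraint

output = 0.0
while True:              # a control loop runs until the machine stops
    start = time.monotonic()

    # Read the input, decide, and act -- all within this cycle.
    error = TARGET - read_belt_speed()
    output += GAIN * error
    set_motor_output(output)

    # If the work doesn't fit in the period, the deadline was missed.
    elapsed = time.monotonic() - start
    if elapsed > PERIOD:
        print("deadline miss:", elapsed)
    else:
        time.sleep(PERIOD - elapsed)
```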
And worrying about the vagaries of data transmission over cellular
networks requires no network knowledge below the application level. In
fact, I doubt your programmers even know HOW the cellular network
operates at OSI layers 6 and below.
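For what it's worth, the application-level handling being described usually
looks something like the sketch below: retries with backoff on top of an
ordinary HTTP call, never touching anything below what the OS and library
already provide. The URL, payload, and retry limits are made up for
illustration.

```python
import time
import urllib.error
import urllib.request

def send_vehicle_update(payload, url="https://example.invalid/updates",
                        retries=5):
    """Send one update, tolerating a flaky cellular link at the app level.

    The endpoint is a placeholder; the code only ever sees timeouts and
    failed requests, not anything below the socket API.
    """
    delay = 1.0
    for attempt in range(retries):
        try:
            req = urllib.request.Request(url, data=payload, method="POST")
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status                    # delivered
        except (urllib.error.URLError, TimeoutError, OSError):
            # Dropped connection, no coverage, tower handoff -- the
            # application just sees a failure and backs off.
            time.sleep(delay)
            delay = min(delay * 2, 60)
    return None                                       # give up; caller queues it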
When I worked on military systems - trainers, weapons control, command &
control, intelligence, ..... - you couldn't turn your head without
having to deal with "real world" issues - both of the hardware and
networks one was running on, and the external world you had to interact
with.
I'm sorry your military systems were so unstable. But programmers don't
worry about the hardware.
If you think anybody can code a halfway decent distributed application,
without worrying about latency, transmission errors and recovery,
network topology, and other aspects of the underlying "stuff" - I'd sure
like some of what you've been smoking.
Been doing it for 30+ years - starting when I was working for IBM back
in '82.
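As a rough illustration of the concerns being argued over (latency,
transmission errors, recovery) handled at the application layer: a minimal
sketch, with a made-up host, port, and wire format, of a request/response
exchange that budgets a deadline and treats slow or broken links as normal
outcomes the caller must deal with.

```python
import socket
import time

def query_node(host, port, request, deadline=2.0):
    """Send a request to a peer and wait for a newline-terminated reply.

    host/port and the protocol are illustrative only; the point is that
    latency and partial or failed transfers are handled explicitly.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=deadline) as s:
            s.sendall(request.encode() + b"\n")
            chunks = []
            while True:
                remaining = deadline - (time.monotonic() - start)
                if remaining <= 0:
                    return None, deadline             # blew the latency budget
                s.settimeout(remaining)
                data = s.recv(4096)
                if not data:                          # peer closed mid-reply
                    return None, time.monotonic() - start
                chunks.append(data)
                if data.endswith(b"\n"):
                    break
    except OSError:                                   # refused, reset, timed out
        return None, time.monotonic() - start
    return b"".join(chunks).rstrip(b"\n"), time.monotonic() - start

# The caller decides whether to retry, ask another replica, or degrade --
# that's the "recovery" part.
reply, latency = query_node("node1.example.invalid", 9000, "STATUS")
```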
Oh, and by the way, an awful lot of big data applications are run on
standard x86 hardware - in clusters and distributed across networks.
Things like network topology, file system organization (particularly
vis-a-vis how data is organized on disk) REALLY impact performance.
That's what I mean about what people think is "big data". Please show
me ANY x86 hardware which can process petabytes of data before the end
of the universe. THAT is big data.
Too many people who have never seen anything outside of the PC world
think "big data" is a few gigabytes or even a terabyte of information.
In the big data world, people laugh at such small amounts of data.
I might also mention all the folks who have been developing algorithms
that take advantage of the unique characteristics of graphics processors
(or is algorithm design outside your definition of "programming" as well?).
Miles Fidelman
What about them?