>
>
> Many businesses see switching to a new language or even using a new
> language side by side with their existing language choice as a business
> risk.
>
>
>
Jesper's approach to using-a-new-language-for-your-project™, a slightly
adapted version of Joe Armstrong's approach:

Companies, once they get up and running, are very susceptible to the idea
that risk is an evil which must be managed. Thus, anything but a startup,
or something which tries to be like a startup, will eventually get dragged
towards a language monoculture: it is perceived to be easier to hire for,
and perceived to be easier to control. True, whenever you employ a new
programming language, you risk redoing work you have already done. On the
other hand, a single language also means you buy into its strengths and
weaknesses, so the monoculture has a side effect of hurting the business.

There is a best time for making a language switch. If you have a recent
project of the large and complex kind, which is bound to fail soon, you
have a good opportunity for a switch. By playing to the strengths of the
language, by drinking coffee, and by your own perseverance, you can make
magic happen. Target the kernel of the problem space, write a small
solution in 72 hours as a tech demonstrator, and show management that the
crux of the problem—the reason the previous project failed—is solved in
this new solution. Once this gets traction, the choice of language plays a
minor role. In short, the project drives the adoption, not the language
itself.

The 72 hours can be extended to a couple of weeks if you have some 20%
time. The key is to get a result fast though, so you can start showing off
the achievement in the project. This is what gets traction in companies.
Solutions, not religion. Before biting the bullet, however, it is good to
run some experiments. Try rewriting small parts of your existing system in
the new language, and use that to figure out where the pain points are. For
example, it might be that the (ephemeral) JSON data you send around
between systems doesn't lend itself to the structure of the Go JSON
decoding package. This suggests patching the existing system now, paving
the way for adoption of a new system later. Another example is to build a
system within the system: your new page-table algorithm for an operating
system can be simulated in a user process. If you have a feed of requests,
you can replay those against your experimental kernel.
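
To make the JSON point above concrete, here is a minimal sketch of the kind
of friction such an experiment can surface. The Event type and field names
are invented for illustration; the point is only that a payload whose "id"
field arrives sometimes as a number and sometimes as a string does not map
onto a plain struct, and needs a small custom UnmarshalJSON shim:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Event is a hypothetical message shape: upstream sends "id" sometimes
    // as a JSON number and sometimes as a string, which a plain struct
    // field cannot absorb directly.
    type Event struct {
        ID   string
        Kind string
    }

    // UnmarshalJSON smooths over the inconsistent "id" field by accepting
    // either form and normalising it to a string.
    func (e *Event) UnmarshalJSON(b []byte) error {
        var raw struct {
            ID   json.RawMessage `json:"id"`
            Kind string          `json:"kind"`
        }
        if err := json.Unmarshal(b, &raw); err != nil {
            return err
        }
        e.Kind = raw.Kind

        // Try the string form first, then fall back to a number literal.
        var s string
        if err := json.Unmarshal(raw.ID, &s); err == nil {
            e.ID = s
            return nil
        }
        var n json.Number
        if err := json.Unmarshal(raw.ID, &n); err != nil {
            return err
        }
        e.ID = n.String()
        return nil
    }

    func main() {
        for _, payload := range []string{
            `{"id": 42, "kind": "signup"}`,
            `{"id": "42", "kind": "signup"}`,
        } {
            var e Event
            if err := json.Unmarshal([]byte(payload), &e); err != nil {
                fmt.Println("decode error:", err)
                continue
            }
            fmt.Printf("id=%s kind=%s\n", e.ID, e.Kind)
        }
    }

Finding this kind of mismatch in a throwaway experiment is cheap; finding
it halfway through the 72-hour push is not.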

Once you have made your experiments, you have cut down the adoption time
considerably, which makes it far easier to build a real solution in 72
hours. The other key factor is to understand the problem space well enough
that succinctness becomes a major driving factor. Once you have the right
structure of a program, the simplicity and elegance of a solution tend to
make the software write itself, due to there being few moving parts. In Go,
this usually means figuring out the right interfaces, which make it easy to
reassemble the code in many different ways. In something like Haskell, the
game is to figure out what algebraic[0] properties the system has, so you
can split it into building blocks which can then be recombined into the
solution (this is why Haskell people like to use special identifying terms
for different kinds of building-block glue). Something like OCaml supports
"functions at the module level" (functors): a module takes another module
as input and produces a module as output. The right solution often
exploits such properties in order to achieve the same solution with less
code. And simple solutions tend to outperform complex solutions in the end
anyway.
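
As a hedged sketch of the "right interfaces" point in Go (all names below
are made up for illustration), a single small interface plus a compose
function is often enough to let the pieces be reassembled freely:

    package main

    import (
        "fmt"
        "strings"
    )

    // Step is a hypothetical one-method interface; small interfaces like
    // this are what make it easy to recombine pieces of a Go program.
    type Step interface {
        Apply(s string) string
    }

    // StepFunc adapts an ordinary function to the Step interface,
    // mirroring the http.HandlerFunc pattern from the standard library.
    type StepFunc func(string) string

    func (f StepFunc) Apply(s string) string { return f(s) }

    // Chain composes any number of Steps into one, so pipelines can be
    // reassembled in different orders without touching the pieces.
    func Chain(steps ...Step) Step {
        return StepFunc(func(s string) string {
            for _, st := range steps {
                s = st.Apply(s)
            }
            return s
        })
    }

    func main() {
        trim := StepFunc(strings.TrimSpace)
        upper := StepFunc(strings.ToUpper)

        pipeline := Chain(trim, upper)
        fmt.Println(pipeline.Apply("  hello  ")) // prints "HELLO"
    }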

Failing projects come in many forms.

A slow Python program's machinery is creaking under the normal daily load,
and more machines need to be added in order to handle the pressure. A Java
program's heap is growing out of bounds; garbage collection is stopping the
world and hurting response times. A C++ program is dying because a junior
programmer added some code which broke undocumented assumptions about the
memory state.

A system has seen a steady influx of new users, some of whom use it in ways
not anticipated when it was first written. A system has broken down because
its algorithmic structure could handle the existing load, but not the new
one.

A project can be so complex from the start that it is doomed. But a project
can also doom itself through the programmers' perception that "it cannot be
done".

The key is to get into the limbo space between success and failure and then
use your-poison-of-choice to move a project from limbo into success. When
doing so, don't pick just any project, but pick one that suits the language
you wish to adopt. Go's strength is its relative sympathy to the underlying
hardware: the overhead of using the language is low, which in turn means
programs compile down close to what the machine can efficiently execute. In
addition, it is one of the few languages with a built-in concurrency story,
which comes in very handy on the modern server side. Finally, low-latency
operation has been a goal of the language for some time.
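
As a minimal sketch of what "built-in concurrency" means in practice (the
request data here is made up), a pool of goroutines draining a channel
needs nothing beyond the standard library:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        requests := make(chan int)
        var wg sync.WaitGroup

        // Start a small pool of workers; each goroutine drains the shared
        // channel until it is closed.
        for w := 0; w < 4; w++ {
            wg.Add(1)
            go func(worker int) {
                defer wg.Done()
                for req := range requests {
                    fmt.Printf("worker %d handled request %d\n", worker, req)
                }
            }(w)
        }

        // Feed some work and wait for the pool to finish.
        for i := 1; i <= 8; i++ {
            requests <- i
        }
        close(requests)
        wg.Wait()
    }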

A good indicator of this kind of thinking is to look 7 years back, to the
genesis of the Node.js project. There is nothing remarkable about Node.js,
except that it combined a number of useful technologies into a stack: V8,
an event loop, and an I/O engine. But adoption was driven by its ability to
supplant existing server-side solutions. The killer productivity feature
was that you could replace your old clunky server backend with a new, lean
backend written in JavaScript. In other words, you had server solutions
which were failing, and in came the shiny new Node.js system, proving it
could run on a fraction of the hardware of the old solution. 7 years ago,
there was relatively little focus in the community-at-large on the latency
of responses, so this major problem for Node.js was ignored. This is
now becoming a limbo in which languages such as Go, Erlang, and Elixir can
operate due to their focus on low-latency operation.

My own experience is from the Erlang world, but it applies equally well in
the Go world: an existing system was not meeting reliability needs,
crashing weekly and requiring intervention by 3 people in the organization.
By utilizing an Erlang strength, reliable operation through coordinated
system restarts from known invariant states, we could quickly replace the
failing component with one that has had one bug in a year, a bug which has
not been fixed yet because it is so rare that the effort required to fix it
outweighs the cost of its occurrence. Thanks to automatic restarts of
subsystems, the bug does not cause a fatal failure in normal operation.
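
The restart-from-known-invariants idea translates directly to Go. Here is a
minimal, hypothetical sketch of the principle (not the actual system
described above): a supervising loop that restarts a failing component from
its initial state instead of trying to repair it in place:

    package main

    import (
        "fmt"
        "time"
    )

    // supervise runs a component and, if it fails or panics, restarts it
    // from a known-good initial state rather than patching the broken one.
    func supervise(name string, run func() error) {
        for {
            err := func() (err error) {
                defer func() {
                    if r := recover(); r != nil {
                        err = fmt.Errorf("panic: %v", r)
                    }
                }()
                return run()
            }()
            if err == nil {
                return // clean shutdown
            }
            fmt.Printf("%s failed (%v); restarting\n", name, err)
            time.Sleep(100 * time.Millisecond) // back off before restarting
        }
    }

    func main() {
        attempts := 0
        supervise("worker", func() error {
            attempts++
            if attempts < 3 {
                return fmt.Errorf("transient failure %d", attempts)
            }
            fmt.Println("worker completed")
            return nil
        })
    }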

Another situation was a project with faulty state handling: the system
gradually introduced errors into its persistent state, which then required
manual cleanup before the system could do its job correctly. By replacing
the system with an idempotent solution, and by solving some distribution
problems that existed in the old solution, we could quickly swap in one
that worked. Once replaced, we used the new system to add instrumentation
around its core, which suddenly gave us metrics on neighboring systems.
This uncovered several bugs in the surrounding infrastructure, which were
then fixed. The end result was a more stable system as a whole.
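
For the idempotency point, a toy sketch (the Store type and operation IDs
are invented for illustration): if applying the same operation twice is a
no-op, replays and retries can no longer corrupt the persistent state.

    package main

    import "fmt"

    // Store stands in for persistent state, keyed by operation ID.
    type Store struct {
        applied map[string]bool
        balance int
    }

    // Apply records a credit exactly once per operation ID, so replaying
    // the same message after a retry or crash cannot corrupt the state.
    func (s *Store) Apply(opID string, amount int) {
        if s.applied[opID] {
            return // already applied: a no-op on replay
        }
        s.applied[opID] = true
        s.balance += amount
    }

    func main() {
        s := &Store{applied: make(map[string]bool)}
        s.Apply("op-1", 50)
        s.Apply("op-1", 50) // replayed message, harmless
        fmt.Println(s.balance) // prints 50
    }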


[0] Algebra is here used in the sense of the Arabic "al-jabr" (الجبر),
which supposedly means "reunion of broken parts".
