2014/10/09 10:58 "lee" <l...@yagibdah.de>:
>
> Joel Rees <joel.r...@gmail.com> writes:
>
> >> 2014/09/25 9:15 "lee" <l...@yun.yagibdah.de>:
> >>
> >>> Joel Rees <joel.r...@gmail.com> writes:
> >>
> >>
> >> Hmm.  So linkage is a result of complexity,
> >
> > What is complexity?
> >
> > Complexity is not a simple topic. :-\

Indeed. And one of the problems with computers is that people want to
believe that computers can make complexities go away.

Some complexities you can encapsulate or hide, or expose in an
organized manner so that they are easier to deal with. Others, no.

> >> and implicitness is a result of
> >> undeclaredness (or unawareness of declaredness).
> >
> > Sort of, but not quite.
> >
> > I would rather say,
> >
> >     Implicitness is the lack of explicit declaration at the point
> > where the linkage is expressed (or occurs).
> >
> > but I'm not sure that would be universally well understood, either.
>
> So implicitness isn't a result of something but a lack of explicitness.

Generally, the things which are implicit are the things which are not
said, but assumed to be understood: unspoken assumptions.
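To make that concrete, here's a small Python sketch (the names are
invented for illustration): the first function's dependency on
tax_rate is the unspoken assumption, while the second declares the
same linkage at the point where it occurs.

```python
tax_rate = 0.08  # an unspoken assumption shared by every caller

def price_implicit(amount):
    # The linkage to tax_rate is implicit: nothing at the call site
    # declares it, so a reader can't see the dependency there.
    return amount * (1 + tax_rate)

def price_explicit(amount, rate):
    # The same linkage, made explicit where it is expressed.
    return amount * (1 + rate)

print(price_implicit(100))        # depends on hidden module state
print(price_explicit(100, 0.08))  # the dependency is visible in the call
```

Both calls compute the same number today, but only the second one
tells you what it depends on.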

Logical implication is a different thing, the process of deriving
something from assumptions which have to be explicit. The base word
"imply" can cause yet another kind of confusion.

> Too much explicitness isn't good, either, because it'll get into your
> way.

Yeah, if you take the time to explicitly specify every parameter,
you're going to have a hard time getting started coding. And
specifying too many parameters can really slow an implementation down.

> You could as well argue that linkage is basically a bad thing and
> therefore should only be accepted for instances where it has significant
> advantages which outweigh the disadvantages.  At least we have a
> tautology here.

Oh! The problem of evil rears its head in mathematics. ;-/ (sorry.)

But the hidden assumption that linkages can be completely done away
with is where the logic goes wrong. Remove all externally accessible
parameters and you can't even write the algorithm, much less implement
it.

> > Generally, reducing complexity and reducing linkage are related, but
> > not necessarily. The degree to which linkage is implicit, or to which
> > entanglement is hidden, is not necessarily dependent on the degree of
> > either complexity or linkage. These can be independent variables,
> > depending on the case in question. In some cases, you can even make
> > them independent variables, when they didn't start out that way in your
> > analysis.
>
> Hm, true ... Less linkage is easier to hide than more linkage.  It makes
> me think of a simple perl script.  Such a script probably has an
> unbelievable amount of implicit linkage. For example:
>
> perl -e 'print scalar localtime, "\n";'

Well, that indeed illustrates a lot about complexity, and about hiding
it, along with the hidden parameters that can turn into implicit
linkage.
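Sketching the same point in Python rather than perl: localtime takes
one visible argument, but its result also depends on the TZ
environment variable, a hidden parameter that never appears in the
call. (This relies on time.tzset(), so it's Unix-only.)

```python
import os
import time

t = 0  # a fixed instant: 1970-01-01 00:00:00 UTC

os.environ["TZ"] = "UTC"
time.tzset()
utc_hour = time.localtime(t).tm_hour    # 0

os.environ["TZ"] = "Asia/Tokyo"
time.tzset()
tokyo_hour = time.localtime(t).tm_hour  # 9 (JST is UTC+9)

# Same function, same argument, different output: implicit linkage
# to process-wide state.
print(utc_hour, tokyo_hour)
```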

(I'd like to say more about perl, but I don't have time.)

> >> Since you cannot make things less complex,
> >
> > I'm not sure what you're trying to say.
> >
> > If you know you can make things more complex, you know that there must
> > be things that can be made less complex.
>
> The less complicated things tend to be deprecated and to become
> obsolete.

Well, the sales crew definitely wants you to believe it.

> 25 years ago, computers didn't have sound cards.  You could possibly add
> one, making your computer more complicated both in terms of total
> complexity of hardware and software.  Nowadays, a replacement for a
> sound card is built in.  Making things less complicated would mean to
> omit or to remove the sound cards and their replacements.  Who wants to
> do that?

On the one hand, sometimes you do remove most of the sound software,
leaving just enough of the drivers to keep the sound card in a safely
powered-down state.

On the other hand, with sound-on-the-motherboard, many old sound card
modes are unsupported. The overall number of externally accessible
parameters, and the complexity of interaction among what remains, are
decidedly less than what all but the cheapest sound cards used to
supply.

Also, with all the stuff that is on the motherboard, you can often get
rid of much of the circuitry that would otherwise drive the external
busses, and simplify much of the driver software.

You really can't say that progress is linear in the direction of
increasing complexity.

> > There are several kinds of complexity.
> >
> > One is purely subjective -- perceived complexity: "It's different, so
> > it's complicated." or "I don't understand it, so it's complicated." We
> > can talk about the parsing of a problem by the human brain, but it
> > wouldn't help yet. We should set perceptions of complexity aside here.
> >
> > If you have a device with 100 inputs and 100 outputs, that's going to
> > look complicated, right? But if all the inputs just feed directly to
> > the outputs, it's not really all that complicated after all. This is
> > one kind of complexity. Analysis is straightforward.
> >
> > If some of the inputs are inverted or amplified, that's a little more
> > complicated, but it's the same kind of complexity. Also, if some of
> > the inputs combine with others, so that some outputs are a function of
> > multiple inputs, this is a bit more complicated, but it's still the
> > same kind of complexity.
> >
> > If some outputs feed back into their own inputs, this changes the kind
> > of complexity. Small circuits aren't too bad, but if you have even 10
> > inputs and 10 outputs, and you have some outputs that are a function
> > both of themselves and their own inputs, analysis gets difficult in a
> > hurry. If all ten are fed-back and mixed with other inputs, you have a
> > circuit that is more complex (and more complicated) than the simple
> > one with a hundred inputs and outputs that don't feed back.
>
> How is this a different kind of complexity?  It may be more complexity,
> yet it's still the same kind.

Some mathematicians sort of agree with you. Sort-of. They say the
complexity is not substantially different until you use something more
than simple feedback. But let me unpack that a bit:

When you have feed-back you have to know what the previous output
state was before you can start analyzing the current output state.

Consider an inverting binary buffer, without feedback. There is one
input and one output, and, given the input, you know immediately what
the output is. 1 gives 0, and 0 gives 1: two states for each input,
two states for each output.

A cleaner case, but perhaps too obvious to get a good look at without
thinking carefully, is a straight wire, or a non-inverting buffer --
you get out what you put in, and if you are working in binary, you
have one input with two possible states and one output with two
possible states.

So if you have two wires, working in binary input/output, you have two
states each, so that's a total of four combined input states and four
combined output states for the two wires. Three wires, 2^3 or 8 states
on input and output. The number of states grows as 2^n in the number
of inputs (and likewise in the number of outputs), but each output is
only dependent on its own input.

As long as there is no feedback, you can combine inputs or
"un-combine" outputs (multiplex and demultiplex), and the state count
stays at 2^n, with no dependence on history. Fairly straightforward
for analysis.
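A minimal Python sketch of the feedback-free case, using straight
wires where each output equals its own input. (Note that three binary
wires give 2^3 = 8 combined states.) Enumerating the behavior is just
one pass over the input patterns:

```python
from itertools import product

def wires(inputs):
    # No memory, no cross-coupling: output is just the input.
    return tuple(inputs)

for n in (1, 2, 3):
    patterns = list(product((0, 1), repeat=n))
    # Every behavior is covered by one pass over the 2**n patterns.
    assert all(wires(p) == p for p in patterns)
    print(n, len(patterns))  # prints: 1 2, then 2 4, then 3 8
```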

Now consider a toggle flip-flop, where you are only looking at the
toggle input and only at the positive-logic output. You have to
consider both the previous state and the input state. Since the input
is a transition rather than a state, it may not be clear what's
happening; but if you line four flip-flops up in sequence and take the
outputs of all four flip-flops as the output of the machine, you can
get 16 output states with only one input line. The input can be
considered as active transition vs. everything else, or, if you need
to look at spikes and such, as steady state(s), inactive transition,
and active transition.

Such a sequence of flip-flops makes a simple counter with certain
timing issues that may make it inappropriate for many circuits. And
you can count in bases other than powers of two by further feedback.
These kinds of counters are still fairly straightforward, but feedback
causes the complexity to increase much more quickly than without
feedback.

The number of output states grows exponentially with the number of
counting stages: n stages give 2^n states, so each added stage doubles
the state space. And if you have multiple inputs feeding back, the
state space grows faster still.
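Here's a rough Python model of that chain of four toggle flip-flops, a
ripple counter, simplified to ignore the timing issues mentioned
above. With feedback, the next output depends on the previous state,
not just the current input, and one input line reaches 16 states:

```python
def tick(state):
    """Advance a chain of toggle flip-flops by one active transition.

    state is a list of bits, least significant stage first. The first
    stage toggles on every input transition; each later stage toggles
    when the stage before it falls from 1 to 0 (the ripple).
    """
    new = state[:]
    carry = True  # the external toggle input fires
    for i in range(len(new)):
        if not carry:
            break
        new[i] ^= 1
        carry = (new[i] == 0)  # a 1 -> 0 fall toggles the next stage
    return new

state = [0, 0, 0, 0]
seen = set()
for _ in range(16):
    seen.add(tuple(state))
    state = tick(state)

print(len(seen))  # 16 distinct states from a single input line
```

After 16 transitions the counter wraps back to its starting state,
which is exactly the history-dependence that makes analysis harder.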

Now, feedback is what allows memory circuits to be constructed.

Add memory circuits to your device, so that you can remember what the
circuit's state was more than one timing interval previously, and the
level of complexity suddenly becomes significantly greater. Can you
see why?

> > If you can take the device with feedback and split it into five
> > separate devices, where there is no interconnection between the five
> > devices, even if there is feedback within individual devices, the
> > resulting collection of devices is generally much easier to analyze
> > than the single device with ten inputs and ten outputs with feedback.
> > And much easier to design and build correctly.
>
> It won't be able to do the same things as the device with 10 inputs and
> 10 outputs because you have removed the interconnections.

If you've removed functional interconnections, you have mishandled it.
But, then, I said at the start, no interconnection between the five
devices, so, there were no functional interconnections to be removed.

> Each device
> is on its own, with the same kind of complexity.  Only each device is
> less complex, and the whole thing is less complex because you removed
> all the linkage.

Let me try that again. Ten inputs and ten outputs in the overall
device. Within the overall device, there are five separate devices.

Why would you group independent devices? Well, think of a bicycle. The
pedals are more-or-less independent of the handlebars and the
mud-guards, aren't they? But the bicycle is still one device. Assuming
the bicycle is moving forward, if you have to know where the pedals
are to be able to tell what happens when you turn the handlebars to
the left, there's something wrong with the design of the bicycle.

The original construction may not have separated the sub-devices well,
but functional analysis of the inputs and outputs has shown that the
functions are independent of each other, and any interconnections that
existed had no real effect on the overall operation.

In coding, we call this kind of thing refactoring.
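In code, that refactoring looks something like this Python sketch
(with invented bicycle-flavored functions and made-up constants): a
single routine whose two outputs never interact splits cleanly into
independent pieces.

```python
def combined(pedal_angle, handlebar_angle):
    # One "device" with two inputs and two outputs, but no
    # cross-linkage between the two computations.
    wheel_torque = pedal_angle * 0.8
    heading = handlebar_angle * 0.5
    return wheel_torque, heading

# After refactoring: two devices, each analyzable on its own.
def drive(pedal_angle):
    return pedal_angle * 0.8

def steer(handlebar_angle):
    return handlebar_angle * 0.5

# The split preserves behavior because the outputs were independent.
assert combined(10, 4) == (drive(10), steer(4))
```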

> > Programs and program functions are similar, the inputs being input
> > parameters, and the outputs being the result, including side-effects.
>
> I guess there can be side effects through implicit linkage and others.

Well, yeah, implicit linkage does tend to induce unexpected side-effects.

> >> If it is, we can't seriously object systemd, can we?
> >
> > Well, that's what the pro-systemd folks seem to be saying, isn't it?
>
> They don't realise that they employ too much linkage.

Four years ago, I wanted to think so. The record of the project leads
me to consider other possibilities, as well.

> That it still works doesn't mean anything.  I'm finding this
> point particularly interesting.  It's easily overlooked ...

You mean, that it _appears_ to work, I think.

Thanks for giving me an opportunity to say this on-list, and I hope
you won't mind if I borrow some structure from this when I re-write my
explanation of complexity on one of my blogs.

Joel Rees

Computer storage is just fancy paper, the CPUs just fancy pens.
All is a stream of text flowing from the past into the future until
the end of time.


Archive: 
https://lists.debian.org/CAAr43iPvPG+3pSoEPcdzDS40tS=vQZR1xBdY_Kv5caT6PH=m...@mail.gmail.com