On 08/04/2016 05:09 AM, J. Roeleveld wrote:
On Tuesday, August 02, 2016 12:16:32 AM james wrote:
On 08/01/2016 11:49 AM, J. Roeleveld wrote:
On Monday, August 01, 2016 08:43:49 AM james wrote:
<snipped>
Way back, when the earth was cooling and we all had dinosaurs for pets,
some of us hacked on AT&T "3B2" Unix systems. They were known for their
'roll back and recovery', triplicated (or more) transaction processes,
and a 'voters' system to ferret out whether a transaction was complete and
correct. There was no ACID, the current 'gold standard' if you believe
what Douglas and others write about concerning databases.
<snip>
Comparing results of codes run on 3 different processors or separate
machines for agreement within tolerances is quite different. The very
essence of using voting, where a result less than 1.0 (that is,
(n-1)/n or (n-x)/n) was acceptable, was requisite on identical
(replicated) processes all returning the same result (expecting either
a 0 or a 1). Results were either logical matches or within rounding
error of acceptance. Surely we need not split hairs. I was merely
pointing out that the early telecom systems formed the basis of the
widespread transaction processing industry and are the granddaddy of
the ACID model/norms/constructs of modern transaction processing.
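
To make the voting idea concrete, here is a rough Python sketch (all
names made up, nothing from any real telecom codebase): run the same
computation on n replicas and accept the majority answer, else flag it.

    from collections import Counter

    def vote(replica_results, quorum=None):
        # majority-rule over n replicated runs of the same process
        n = len(replica_results)
        quorum = quorum if quorum is not None else n // 2 + 1
        winner, count = Counter(replica_results).most_common(1)[0]
        if count >= quorum:
            return winner                 # (n-1)/n or better agreement
        raise RuntimeError("no quorum; replicas disagree")

    print(vote([1, 1, 0]))  # three replicas, majority says 1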
Hmm... I am having difficulty following how ACID and ensuring results are
correct by double or triple checking are related.
Atomicity, Consistency, Isolation, Durability == ACID (so we are all on
the same page).
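
And for concreteness, a toy sketch of the "A" (atomicity), using
Python's sqlite3 from the standard library; the table and amounts are
made up for illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
    con.executemany("INSERT INTO accounts VALUES (?, ?)",
                    [("alice", 100), ("bob", 0)])
    con.commit()

    try:
        with con:  # one transaction: both updates commit, or neither does
            con.execute("UPDATE accounts SET balance = balance - 50 "
                        "WHERE name = 'alice'")
            raise RuntimeError("crash mid-transfer")  # simulated failure
    except RuntimeError:
        pass

    # the partial debit was rolled back automatically
    print(con.execute("SELECT * FROM accounts").fetchall())
    # -> [('alice', 100), ('bob', 0)]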
Not my thesis. My thesis, inspired by these threads, is that all (4) of
these ACID properties originated in the telephone networks, as separate
issues. When telephonic switching moved from electro-mechanical systems
to computers, each of these properties was developed by the telephonic
software and equipment providers. Banks followed the switching systems,
and these (4) ACID properties were realized to be universally useful,
instituted, and rebranded as 'transactions'. Database systems, developed
by IBM and others, quickly realized the value of ACID properties in all
sorts of forms of data movement and modification (i.e. the transaction).
Database developers and vendors did not invent the ACID properties.
Indeed and in fact, those properties were first used collectively in the
legacy telephonic systems, best described by SS7. Earlier versions are a
case study in the redundancy and reliability of those early telecom
systems. Granted, latency was a big problem, which moving from electric
circuits to digital circuits fixed; yet still there were the five-nines
of quality (99.999%). Wonderful.
For massively parallel needs, distributed processing rules, but it is
not trivial.
Agreed.
<snip>
Another point: there are single big GPUs that can be run as thousands of
different processors, on either FPGAs or GPUs, granted using SIMD/MIMD
style processors and things like 'systolic algorithms', but that sort of
thing is out of scope here. (Vulkan might change that, in an open source
kind of way, maybe). Furthermore, GPU resources combined with DDR5 can
blur the line and may actually be more cost effective for many forms of
transaction processing, but clusters, in their current forms, are very
much general purpose machines.
I don't really agree here. For most software, having a really fast CPU helps.
Having a lot of mediocre CPUs means the vast majority isn't doing anything
useful.
Software running on clusters needs to be written with massive parallel
processing in mind. Most developers don't understand this part.
Where did you get the idea that folks building clusters are not as
interested in using the fastest processors possible? Dude, that's just
failed (non-sequitur) logic.
Well, this premise of yours is a corollary to my thesis; the early
telecom systems developers were historically 'bad ass' and highly
intelligent. It has taken the software development world decades to
catch up to key systems attributes of hardware design (redundancy and
roll-back and recovery). Now that things are digital, you can run codes
on a variety of different hardware to abstract the properties of ACID
and supersede ACID with yet more properties of robust hardware design.
(Sadly, even most EE professors are severely lacking in this knowledge).
Modern EE experts have most of their magic attributed to European
mathematicians, but that's another issue, too complex for the average
java* coder. Curiously, you can read all about Hilbert, should you need
to scratch that itch....
My point:: Douglas is dead wrong about ACID being dominated by Databases,
for technical reasons, particularly for advanced teams of experts.
Wikipedia actually disagrees with you:
https://en.wikipedia.org/wiki/ACID
"In computer science, ACID (Atomicity, Consistency, Isolation, Durability) is
a set of properties of database transactions."
Exactly. Database vendors got the ideas and components (literals and
abstractions) from the telephonics industry to get a leg up; electronic
switching already had those key components, now referred to as ACID, in
hardware. When those electro-mechanical systems moved to digital
circuits, Bell Labs ensured those properties were a closely held secret
wrapped up in the Unix OS. They did promote ACID in their software, and
the banks and the other customers were likewise saying YES YES YES, we
want telecom ACID levels of performance in our (developing) computer
software too. But the migration to digital let the 'cat out of the bag'
on the wonders of ACID (long before Timothy Leary, just so the
Californians among us can keep up!).
In other words, it's related to databases
They (vendors) copied it from telecom and wildly promoted it, very
successfully. Combine this with the fact that most US EE programs are
abysmally weak (always have been), and now we indeed and in fact have
this severe lapse in robust and fault tolerant systems.
WHY? Nothing (industrial or commercial) had the "five-nines" of
reliability but those electro-mechanical telephonic systems.
*nothing* Everybody wanted it; hence those (4) components were harvested
from telephonics and used as a model for all transactions.
Take "atomicity" for example. It has it's roots in "call setup".
Dialogic is a pc board vendor (from decades ago) that followed those
early systems. Here is a document (from the 70s/80s/?) were they
have "40 Atomic Functions" that they use in software to control the
hardware for 'call setup and management'. Sure many more documents
exist, but they may not be publically available in electronic forms.
All of this occurred before those folks that write for Wikipedia were
ever born, so they could not possible be aware of these issues and
historical precedence.
[1]
https://www.dialogic.com/webhelp/MSP1010/10.4.0/WebHelp/ppl_dg/l3p_cic.htm
One can research each of those four properties and discover how telecom
integrated them into the phone system of North America (Europe evolved
almost simultaneously). Bell Labs is "the daddy of ACID"; and it was a
tightly held secret as long as possible, to delay the expansion of usage
and the eventual break up of that legacy monopoly.
There are many things in the (legacy) communications world that have not
accurately made their way to digital in a form freely available on the
internet (like signal intercept). Think of all of those hidden
antennae arrays in the UK when microwave telecom was all the rage.
MCI was a key player in exploiting microwave (another tenet of EE).
Surely most of the MBA, HR and Finance types of
idiots running these new startups would not know a coder from an
architect, and that is very sad, because a good consultant could have
probably designed several robust systems in a week or two. Granted, few
consultants have that sort of unbiased integrity, because we all have
bills to pay and much is getting outsourced... Integrity has always been
the rarest of qualities, particularly with humanoids......
The software Uber uses for their business had to be developed in-house as
there, at least at the time, was nothing available they could use ready-made.
This usually means, they start with something simple they can get running
quickly. If they want to fully design the whole system first, they would never
get anything done.
Where these projects usually go wrong is that they wait too long with a good
robust design, leading to a near impossibility of actually fixing all the, in
hindsight obvious, design mistakes.
(NOTE: In hindsight, as most of the actual requirements would not be clear on
day 1)
I could not agree with you more.
The more processors readily available to codes that know how to use
them in parallel, the faster and better and more reliable the systems
developed (including the software) will be. Some are working on extremely
low latency systems where FPGAs are embedded in general purpose
processors (Intel is leading on this). The DoD has been using these
systems for decades. Clusters would be superior to single (or multicore)
systems, if these kids knew anything about redundancy and fault
tolerance; both of which originate in hardware, and which the telecom
industries perfected to the 99.999% robustness level (while IBM drooled
on their punch cards). I know, I was there......
And in my opinion, that was the most important of the collective of
reasons why AT&T, its 10,000+ lawyers, and the assholes in our government
fought so hard to keep early Unix expansion out of the hands of the
masses. At one point it was easier to get a top-secret clearance than it
was to code on those early telecom systems.
and the switch was configured, then the code would
essentially 'vote' and majority ruled. This is what led to phone calls
(switched phone calls) having variable delays, often on the order of
seconds, mis-connections, and other problems we all encountered during
periods of excessive demand.
Not sure if that was the cause in the past, but these days it can also
still take a few seconds before the other end rings. This is due to the
phone-system (all PBXs in the path) needing to set up the routing between
both end-points prior to the ring-tone actually starting.
When the system is busy, these lookups will take time and can even
time-out. (Try wishing everyone you know a happy new year using a wired
phone and you'll see what I mean. Mobile phones have a separate problem
at that time.)
I did not intend to argue about the minutiae of how a particular Baby
Bell implemented their SS7 switching systems on Unix systems. My point
was that 'transaction processing' grew out of the early telephone
network, the way I remember it:: ymmv. Banks did dual entry accounting
by hand and had clerks manually load data sets; then double entry
accounting became automated, and ACID style transaction processing was
added later. So what SQL folks refer to as ACID properties comes from
the North American switching heritage and eventually the world's telecom
networks, eons ago.
There is a similarity, but where ACID is a way of guaranteeing data integrity,
a phone-switch does not need this. It simply needs to do the routing
correctly.
Have you ever talked to an old military officer that worked in
Intelligence? Like the spy plane incident over the Soviet Union, circa
1960 [2]? https://en.wikipedia.org/wiki/1960_U-2_incident
Data integrity almost caused WW3.
WRONG. The five-nines was so coveted by everyone else that there was a
feeding frenzy on just how those folks at Bell Labs pulled it off. Early
(1950s-1970s) computational systems were abysmal to own or operate, and
yet the sorry-ass phone company had 99.999% perfection (thanks to Bell
Labs)? They provided the T1 and T3 lines in/out of the Pentagon.
The jealousy was outrageous. Database vendors were struggling with
assembler and 'board changeouts', as Rich alluded to.
<snip>
ACID is about data integrity. The "best 2 out of 3" voting was, in my
opinion, a work-around for unreliable hardware.
Correct. Voting was used as the precursor technology to distributed
systems (today it's the cluster). It added to the reliability and
robustness. It provided consistency. It demonstrated that the entire
string of what was needed for SS7, including call setup, could be
replicated and run on a cluster (oops, another hardware set)....
Absolutely true. But the fact that high reliability in computer
processing (including the billing) could be replicated, performed
elsewhere, and then 'recombined' proves that the need for any ACID
function can be split up and run on clusters to achieve ACID standards
or even better. So my point is that the cluster, if used wisely,
will beat the 'dog shit' out of any Oracle fancy-pants database
maneuvers. Evidence:: Snoracle is now snapping up billion dollar
companies in the cluster space, because their days of extortion are
winding down rather rapidly, imho.
I disagree here. For some workloads, clusters are really great. But SQL
databases will remain.
As a subset of distributed processing. Oracle (the champion of
databases) is going to atrophy and slip into irrelevance once kids
learn how to supersede ACID with judicious cluster hardware and codes on
top of heterogeneous clusters..... Granted, any corp with billions and
billions and deep (illegal?) relationships with government officials
will eventually prosper again....
Once again, EE will light the forward path.
Also, just because the kids writing the codes have not figured all
of this out does not mean that SQL and any abstraction are better than
parallel processing. No way in hell. Cheaper and quicker to set up,
surely true, but never superior to a well designed, properly coded
distributed solution. That's my point.
Workloads where you can split the whole processing into small chunks,
where the same steps can be performed over a random sized chunk, and
where merging at a later stage will lead to correct results. Then yes.
True, but it's not quite as restrictive as you think. Large systems,
with even just a small bit of parallelism integrated into the overall
architecture, benefit. How much depends on the designers. We do need
more EE coders leading on cluster designs, but the universities (world
wide) have let everyone down.
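
Something like this rough Python sketch (multiprocessing as a stand-in
for cluster nodes; all names illustrative) is that split-then-merge
pattern in miniature:

    from multiprocessing import Pool

    def work(chunk):
        # the "same steps" applied to an arbitrary-sized chunk
        return sum(x * x for x in chunk)

    def split(data, n):
        k = max(1, len(data) // n)
        return [data[i:i + k] for i in range(0, len(data), k)]

    if __name__ == "__main__":
        data = list(range(1_000_000))
        with Pool(4) as pool:          # 4 workers, stand-ins for nodes
            partials = pool.map(work, split(data, 4))
        print(sum(partials))           # trivial merge; matches serial run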
However, I deal with processes and reports where the amount of possible chunks
is definitely limited and any theoretical benefit of splitting it over multiple
nodes will be lost when having to build a very fancy and complex algorithm to
merge all the separate results back together.
NoSQL is an abysmal failure. SQL needs to be a small subset of robust
parallel systems design and implementation. The latest venue is
'unikernels'.
Clusters will dominate because deep pockets can have the latest and
fastest and cheapest hardware, in massive quantities, before the
commoners even learn how it works. ARM64v8 is a prime and current
example. Its heat load per unit of processing blows away CISC based
systems. An FPGA can implement any processor or memory structure, and
can do it in microseconds. But these are areas where attorneys, via the
patent system, abuse light-weight competition.
This algorithm then also needs to be extensively tested, analysed, and
understood by future developers. The additional cost involved will be
prohibitive.
Don't we need more jobs? Are you kidding me? That's why large
corporations are so vehemently aggressive in these spaces. We have all
kinds of 'STEM graduates' here in the US that cannot get a STEM job.
(Hence Trump's appeal to the middle class:: tariffs and promoting
competition at home.)
I disagree, UBER is still using a relational database as the storage layer
with something custom put over it to make it simpler for the developers.
Any abstraction layer will have a negative performance impact.
Wanna bet that UBER and like minded companies change again and again and
again, until they start studying what mathematicians and EEs have been
doing for a very long time?
It is based on a clever idea, but when
2 computers having the same data and logic come up with 2 different
answers, I wouldn't trust either of them.
This is a rare occurrence in digital systems. However, when you look at
other forms of computational mathematics, tolerances have to be used
to get consistency (oops, another property of ACID showing up in legacy
literature).
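
A tiny sketch of what "agreement within tolerance" looks like in code
(math.isclose is standard library; the tolerance value is just
illustrative):

    import math

    def agree(results, rel_tol=1e-9):
        ref = results[0]
        return all(math.isclose(r, ref, rel_tol=rel_tol) for r in results)

    print(agree([0.30000000000000004, 0.3, 0.3]))  # True: accept
    print(agree([0.3, 0.3, 0.31]))                 # False: scrutinize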
I could not care less about UBER's problems, unless they send some funds
my way. BUT, I am willing to share knowledge, so they 'wise up' because
fundamentally, I love disruption in the status quo.
Yep. That the QA of transactions is rejected and must be resubmitted,
modified, or handled by any number of remedies is quite common in many
forms of software. Voting does not correct errors, except maybe a
fractional rounding up to 1 (pass) or down to 0 (failure). It does help
to achieve the ACI of ACID.
It's one way of doing it. But it can also cause extra delays due to
having to wait for separate nodes to finish and then to check if they
all agree.
Once clusters are prototyped on CISC systems, those codes will be
rapidly moving to DSPs, GPUs, FPGAs and DDR5+. Those with deep pockets
will 'smoke' the competition, and idiots like Verizon
will be trying to make more stupid acquisitions. Folks do know that
Verizon sold off billions in data centers, close to the fiber highway,
to buy Yahoo, right? (It "pays out" because they are actually dumping
hundreds of thousands of legacy employees (Trump voters); that's what
that transaction is all about.) They are still doomed to fail, because
the software idiots advising Verizon have no clue about the fundamentals
and mathematics of communications. (A very sad state of affairs for
Verizon.)
Since billions and billions of these (complex) transactions are
occurring, a failed one is usually just repeated. If it keeps failing,
then engineers/coders take a deeper look. Rare statistical anomalies are
auto-scrutinized (that would be replication and voting) and then pushed
to a logical zero or logical one.
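
A rough sketch of that repeat-until-quorum pattern (all names made up;
the flaky replica just simulates a rare anomaly):

    import random
    from collections import Counter

    def vote(results):
        winner, count = Counter(results).most_common(1)[0]
        if count > len(results) // 2:
            return winner                  # pushed to logical 0 or 1
        raise RuntimeError("no quorum")

    def replicated_run():
        # three replicas of the same transaction; one is unreliable
        return [1, 1, random.choice([0, 1])]

    def run_with_retries(attempts=3):
        for _ in range(attempts):
            try:
                return vote(replicated_run())
            except RuntimeError:
                continue                   # anomaly: just resubmit
        raise RuntimeError("kept failing; take a deeper look")

    print(run_with_retries())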
The complexity comes from having to mould the algorithm into that structure.
And additional complexity also makes it more fault-prone.
Only during development and beta tests. After a while it will become
'rock solid' and be pushed down into the lowest levels of hardware, so
it is hidden from the average coder. Here is a billionaire, who is quite
stealthy, that has done this exact thing most recently:
[3] https://www.deshawresearch.com/
[4]
https://www.quora.com/unanswered/Computer-Architecture-How-its-like-working-for-DESHAW-RESEARCH-as-an-ASIC-designer-architect
<snip>
A lot can be described using 'modern' designs. However, the fact remains that
ACID was worked out for databases and not for phone systems. Any sane system
will have some form of consistency checks, but the extent where this is done
for a data storage layer, like a database, will be different to the extent
where this is done for a switching layer, like a router or phone switch.
Please reread my previous posts. You, or anyone, can do the individual
(and robust) research on the ACID components and the history of telecom.
Wikipedia and many other sites have failed you here; sorry.
<snip>
Those incompetencies are usually in the domain of finances and services
provided. The basic service of a telecoms company is pretty simple: "Pass
data/voice between A and B".
There are plenty of proven systems available that can do this. The mistakes
are usually of the kind: The system that we bought does not handle the load
the salesperson promised.
On the surface, you are absolutely correct. Mass education is severely
thwarted by the entire patent system, grotesque lawyers and legal
semantics, and the 'bought and sold politicians' from around the globe
(the same folks that brought us globalism). So folks are merely
"uneducated" in these matters. Yes, these globalists continue to
conspire against commoners, around the globe. Education and sharing of
hardware and software and mathematics and physics will set the captives
free (eventually). This is the essence of WW3, imho.
The fact that the masses and even most coders are blissfully unaware of
where ACID came from is a testament to the failure of globalism that
provides the protection to the billionaire class of manipulators, imho.
With a small number, it might actually still scale, but when you pass a
magic number (no clue what this would be), the counting time starts to
exceed any time you might have gained by adding more voters.
Nope, the larger the number, the more expensive. The number of voters
rarely goes above 5, but it could for some sorts of physics problems
(think quantum mechanics and logic not bound to [0 1] whole numbers).
Often logic circuits (constructs, for programmers) have "don't care"
states that can be handled in a variety of ways (filters, transforms,
counters, etc etc).
"don't care" values should always be ignored. Never actually used. (Except for
randomizer functionality)
Dude, you need to find some RF/analog folks and learn about what's going
on around "noise" in systems. Once thought to be useless, or a
hindrance, it is a fertile ground for innovation that, again, the masses
are blissfully unaware of. Much is termed "classified", just so you know.
Also, this, to me, seems to counteract the whole reason for using
clusters:
Have different nodes handle a different part of the problem.
That also occurs. But my point is that properly designed code for the
cluster can replace the ACID functions offered by Oracle and other
overpriced solutions, on standard cluster hardware.
All commonly used relational databases have ACID functionality as long as they
support transactions. There is no need to only choose a commercial version for
that.
Like the Chinese, they are brilliant copy cats:: nothing wrong with that
(see my take on 100% absolution of all patents, globally).
The problem with today's clusters is that the vendors that employ the
kid-coders are making things far more complicated than necessary, so
the average linux hacker just outsources via the cloud. DUMB, insecure,
and not a wise choice for many industries.
Moving your entire business into the cloud often is.
I could not agree more. HYBRID systems, where the chief
architect/designer works exclusively for the cluster, is where the
future will shake out. All of this idiocy of the masses on the web::
who cares where it is processed. The closer to the
node-idiot-user-consumer, the better, mathematically.
And sooner or later folks are going to get wise and build
their own clusters that just solve the problems they have. Surely hybrid
clusters will dominate, where the owner of the codes does outsource peak
loads and the mundane collection of ordinary (non-critical) data.
Eg. hybrid solutions...
Yes, yes, and HELL YES! In fact, Gentoo stands out as the quintessential
'unikernel' for distributed processing!
Vendors know this and have started another 'smoke and mirrors' campaign called
(brace yourself) 'Unikernels'.....
"unikernels" is something a small group came up with... I see no practical
benefit for that approach.
A minimized Gentoo system and an optimized and severely stripped linux
kernel is pretty much a unikernel. Docker, the leader in the
commercialization of containers, knows this and has subsumed Alpine
linux. Patience, my friend; it will become very clear over time, but not
exactly the way the current vendors are portraying unikernels.
The problem with that approach is they should just be using minimized
(focused) Gentoo on stripped and optimized linux kernels; but that is
another lost art from the linux collection.
I see "unikernels" as basically, running the applications directly on top of a
hypervisor. I fail to see how this makes more sense than starting an
application directly on top of an OS. The whole reason we have an OS is to
avoid having to reinvent the wheel (networking, storage, memory handling,....)
for every single program.
(see above response). For the last few years, I have run into an
astounding number of brilliant folks that have mastered and use Gentoo
on a daily basis. The more I learn about clusters, the more I realize
why this mass of Gentoo folks is so silent on these matters.
Strategic business plans, brah. Gentoo is the world's best kept secret.
Clusters of multiple compute-nodes are a quick and "simple" way of
increasing the number of computational cores to throw at problems that
can be broken down into a lot of individual steps with minimal
inter-dependencies.
And surpass the ACID features of either PostgreSQL or Oracle, and spend
less money (maybe not with you and PostgreSQL on their team)!
Large clusters are useful when doing Hadoop ("big data") style things (I
mostly work with financial systems and the corresponding data).
Storing the entire datawarehouse inside a cluster doesn't work with all the
additional requirements. Reports still need to be displayed quickly and a
decently configured database is usually more beneficial. Where systems like
Exadata really help here is by integrating the underlying storage (SAN) with
the actual database servers and doing most of the processing in-memory.
Eg. it works like a dedicated and custom-built cluster environment
specifically for a relational database.
There is a revolution in hardware memory technologies. In a few more
years, massive RAM will be an integral part of the computational
hardware (think DDR5 and GPUs, currently). Most massive systems can be
split up into smaller systems too. Database vendors have little
incentive to do this for customers. The art of the design and
implementation of 'transaction processing' needs to return to hardware
concepts during this transition.
I say "simple" because I think designing a 1,000 core chip is more
difficult than building a 1,000-node cluster using single-core, single
cpu boxes.
Today, you are correct. Tomorrow you will be wrong.
In that case, clusters will be obsolete tomorrow.
No, the chips and the cluster will be one and the same. Real time
sequence stepping in problem->solution domains, for things like flight
simulation and subsurface fluid management, are still grand challenges
that are a ways off. The average database solution, even for large
commercial/global operations, is going to migrate to clusters. Clusters
and storage will continue to migrate to silicon. The biggest problem is
the patent system and artificial constructs more commonly known in the
business world as "cost barrier to entry" economics. These mostly result
from the way local/state/federal/global laws are implemented and
enforced.
Besides, once that chip or VHDL code or whatever is designed, it can be
replicated and reused endlessly. Think ASIC designers, folks who take an
FPGA project to completion. An EE can code on large arrays of DSPs, or a
GPU (think Khronos group) using Vulkan.
I would still consider the cluster to be a single "machine".
That's the goal.
That, in my opinion, that goal has already been achieved. Unless you want ALL
machines to be part of the same cluster and all machines being able to push
work to the entire cluster...
In that case, good luck in achieving this, as you then also need to
handle "randomly disappearing nodes".
I think Brexit and Trump will replace globalism with localism and
tariffs. Governments will fight over the spoils of tariffs to finance
their gluttony, and locals will figure out how to build and operate
everything, locally. So you are correct. I actually am promoting hybrid
clusters, so the commoners can 'suck the brain-marrow' out of
Wall Street, politicians, and the globalists. Once groups of locals
learn to be self sufficient (think of them as digital Amish), the only
function governments and globalists provide is national security. Folks
that like war can join up and kill folks from other like minded
collectives. Most will be extraordinarily happy to provide 100% of what
they need, locally. There will be some exchange of material, and those
less innovative will lag a bit, but that is what globalists should
concentrate on:: how to teach those less fortunate how to become self
sufficient, locally.
And 90+% of developers still don't understand how to properly code for multi-
threading. Just look at how most applications work on your desktop. They all
tend to max out a single core and the other x-1 cores tend to idle...
Wonder why Bill Gates (in his tax-dodging world charities) is not
teaching this stuff? Rupert Murdoch? Rich Arabs? Chinese?
The elites of the world are 'selfish bastards' and use the good work
that comes from their ranks to further screw up localism (self
sufficiency on a local basis). Sooner or later these globalists will
have to answer to the masses of local citizens, wherever they are
hiding. We have seen the purging of the Republican party. The Democratic
elites are currently undergoing a purging. After Brexit, it
will rapidly expand in Europe. The Saudis are running scared. There is a
pandemic of locals that want to be self sufficient. Folks are tired of
listening to some (asshole) expert that does not live down the street
from them. Globalism flies in the face of common sense, and
computational competence is no exception. There is latency and much
deception in the world of computation, but that too will fall
(eventually).
Granted, many old decrepit codes had to be
redesigned and coded anew with threads and other modern constructs to
take advantage of newer processing platforms.
Intel came with Hyperthreading back in 2005 (or even before). We are now in
2016 and the majority of code is still single-threaded.
The problem is, the algorithms that are being used need to be converted to
parallel methods.
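
Roughly, something like this (a Python sketch; the per-item function is
made up): the algorithm stays the same, but the independent pieces get
restructured so all cores do work instead of one:

    from concurrent.futures import ProcessPoolExecutor

    def price_one(order):
        # independent, CPU-bound work per item
        return sum(i * i for i in range(order)) % 97

    orders = [200_000 + i for i in range(32)]

    if __name__ == "__main__":
        serial = [price_one(o) for o in orders]       # one core busy
        with ProcessPoolExecutor() as ex:             # all cores busy
            parallel = list(ex.map(price_one, orders))
        assert parallel == serial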
Sure, the same is true with distributed, but it's far closer than ever.
The largest problem with clusters is vendors with agendas making things
more complicated than necessary and completely ignoring many fundamental
issues, like kernel stripping and optimization under the bloated OS they
are using.
I still want a graphical desktop with full multi media support. I still want
to easily plugin a USB device or SD-card and use it immediately,.....
That requirement is incompatible with stripping the OS.
Agreed. And I want to build the hardware on my own 3D printer. I am
flexible to try out many offerings, when 3D printing loses those patents
on using metals and semiconductor materials......
This too will come, hopefully sooner rather than later, and without the
shedding of blood....
I do not have the old articles handy, but I'm sure that many/most of
those types of inherent processes can be formulated in the algebraic
domain, normalized, and used to solve decisions, often where other forms
of advanced logic failed (not that I'm taking a cheap shot at modern
programming languages) (wink wink nudge nudge); or at least that's how
we did it.... as young whipper-snappers, back in the day...
If you know what you are doing, the language is just a tool. Sometimes a
hammer is sufficient, other times one might need to use a screwdriver.
--an_old_farts_logic
Thinking back on how long I've been playing with computers, I wonder how
long it will be until I am in the "old fart" category?
Stay young! I run full court hoops all the time with young college
punks; it's one of my greatest joys in life to run with the young
stallions, hacking, pushing, shoving, slicing and taunting other
athletes. An old farts club is not something to be proud of; I just like
to share too much......
Hehe.... One is only as old as he/she feels.
--
Joost
Young kids often show amazing wisdom. The educational processes beat
this out of kids. Isolation and localism (aka home schooling) do allow
kids to explode in both technical competence and creativity.
But this flies in the face of the goals of globalism. When I was young,
there was a kid that was brilliant and 100% home schooled by mostly
uneducated parents. They lived in the bush of Alaska, hundreds of miles
from anyone. Brilliance and innovation are the province of the youth;
just look at all of those young, brilliant minds from post-medieval
Europe. Mass education just beat those traits right out of all children.
Communications and localism will yield many, many brilliant folks, and
that is the greatest fear of the globalists, who want to remain in power
and have dominion over the masses. It's the classic struggle. The path
to a better future is espoused in parallel and distributed and local
decision/control, from politics to hardware to software.
hth,
James