On 08/01/2016 01:03 PM, Rich Freeman wrote:
On Mon, Aug 1, 2016 at 12:49 PM, J. Roeleveld <jo...@antarean.org> wrote:
On Monday, August 01, 2016 08:43:49 AM james wrote:
Sure, this part is only related to
transaction processing, as there was much more to the "five 9s" legacy,
but imho that is the heart of what was the precursor to the ACID
properties now so greatly espoused in the SQL databases that Douglas
refers to.
Do folks concur or disagree at this point?
ACID is about data integrity. The "best 2 out of 3" voting was, in my opinion,
a work-around for unreliable hardware. It is based on a clever idea, but when
2 computers with the same data and logic come up with 2 different answers, I
wouldn't trust either of them.
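For illustration, the "best 2 out of 3" scheme is just majority voting
over redundant computations. A minimal sketch (hypothetical names, not
any vendor's actual implementation) that also captures Joost's point:
when there is no majority, you shouldn't trust any of the units:

```python
from collections import Counter

def vote(results):
    """Majority vote over redundant computations (2-out-of-3 style).

    Returns the answer a strict majority of units agrees on, or raises
    when the units disagree so badly that no majority exists.
    """
    winner, count = Counter(results).most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError("no majority -- redundant units disagree")
    return winner

# Three redundant units, one of them faulty:
print(vote([42, 42, 41]))  # -> 42
```

With three units the scheme masks one fault; with all three
disagreeing, the only honest answer is an error.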
I agree, this was a solution for hardware issues. However, hardware
issues can STILL happen today, so there is an argument for it. There
are really two ways to get to robustness: clever hardware, and clever
software. The old way was to do it in hardware, the newer way is to
do it in software (see Google with their racks of cheap motherboards).
I suspect software will always be the better way, but you can't just
write a check to get better software the way you can with hardware.
Doing it right with software means hiring really good people, which is
something a LOT of companies don't want to do (well, they think
they're doing it, but they're not).
Basically I believe the concept with the mainframe was that you could
probably open the thing up, break one random board with a hammer, and
the application would still keep running just fine. IBM would then
magically show up the next day and replace the board without anybody
doing anything. All the hardware had redundancy, so you can run your
application for a decade or two without fear of a hardware failure.
Not with today's clusters and cheap hardware. As you pointed out,
expertise (and common sense) are the quintessential qualities for staff
and managers.....
However, you pay a small fortune for all of this.
Not today; those exorbitant prices were back then. Sequoia made so much
money, I'm pretty sure that's how they ultimately became a VC firm?
The other trend as
I understand it in mainframes is renting your own hardware to you.
Yes, find a CPA that spent 10 years or so inside the IRS and you get
even more aggressive profitability vectors. Some accountants move
hardware, assets and corporations around and about the world in a shell
game and never pay taxes, just recycling assets among billionaires. It's
pretty sickening, if you really learn the details of what goes on.
That is, you buy a box, and you can just pay to turn on extra
CPUs/etc. You can imagine what the margins are like for that to be
practical, but for non-trendy businesses that don't want to offer free
ice cream and pay Silicon Valley wages I guess it is an alternative to
building good software.
Investment credits, sell/rent hardware to an overseas division, then move
them to another country that pays you to relocate and bring a few jobs.
Heck, even the US states play that stupid game with recruiting
corporations. Get an IRS career agent drunk some time and pull a few
stories out of them.....
You have seen how "democracies" work, right? :)
The more voters involved, the longer it takes for all the votes to be counted.
With a small number, it might actually still scale, but when you pass a magic
number (no clue what this would be), the counting time starts to exceed any
time you might have gained by adding more voters.
Also, this, to me, seems to counteract the whole reason for using clusters:
Have different nodes handle a different part of the problem.
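Joost's scaling point can be sketched with a toy simulation (my own
illustrative model, not anything from the thread's systems): give each
voter a random response latency and compare how long you wait for a
strict majority versus waiting for every voter to be counted:

```python
import random

def count_times(n_voters, seed=0):
    """Toy model: each voter answers after a random latency.

    Returns (majority_time, unanimous_time): wall time until a strict
    majority has answered vs. until every last voter has answered.
    """
    rng = random.Random(seed)
    latencies = sorted(rng.expovariate(1.0) for _ in range(n_voters))
    majority = n_voters // 2 + 1
    return latencies[majority - 1], latencies[-1]

for n in (3, 9, 27, 81):
    maj, unan = count_times(n)
    print(f"{n:3d} voters: majority after {maj:.2f}, all after {unan:.2f}")
```

Waiting on every voter means waiting on the slowest straggler, which
only gets worse as you add voters; that's the point where extra voters
stop buying you anything.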
I agree. The old mainframe way of doing things isn't going to make
anything faster. I don't think it will necessarily make things much
slower as long as all the hardware is in the same box. However, if
you want to start doing this at a cluster scale with offsite replicas
I imagine the latencies would kill just about anything. That was one
of the arguments against the Postgres vacuum approach where replicas
could end up having in-use records deleted. The solutions are to
delay the replicas (not great), or synchronize back to the master
(also not great). The MySQL approach apparently lets all the replicas
do their own vacuuming, which does neatly solve that particular
problem (presumably at the cost of more work for the replicas, and of
course they're no longer binary replicas).
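The conflict described above can be sketched as a toy model (my own
simplification, not real Postgres internals): the primary vacuums away a
row version that a replica's long-running read still needs, so the
replica must either delay applying the change or cancel the query:

```python
# Toy model of the MVCC/vacuum replication conflict -- just the shape
# of the problem, not how any real database implements it.

class Replica:
    def __init__(self):
        self.versions = {}        # row_id -> list of row versions
        self.open_snapshots = []  # oldest version index each reader needs

    def apply(self, record, delay_apply=False):
        kind, row_id, payload = record
        if kind == "insert":
            self.versions.setdefault(row_id, []).append(payload)
        elif kind == "vacuum":
            # payload = version index already reclaimed on the primary
            if any(snap <= payload for snap in self.open_snapshots):
                if delay_apply:
                    return "delayed"        # replica lags the primary
                return "query_cancelled"    # reader loses its version
            del self.versions[row_id][payload]
        return "applied"

r = Replica()
r.apply(("insert", 1, "v0"))
r.apply(("insert", 1, "v1"))
r.open_snapshots.append(0)              # long-running query reads v0
print(r.apply(("vacuum", 1, 0)))        # -> query_cancelled
print(r.apply(("vacuum", 1, 0), True))  # -> delayed
```

Letting each replica decide its own reclamation (the MySQL-style
approach mentioned above) avoids this choice, at the cost of the
replicas doing that work themselves and no longer being byte-identical.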
Why Rich, using common sense? What's wrong with you? I thought you were
a good corporate lackey? Bob from accounting has already presented to
the BOD and got approval. Rich, can you be a team player (silent idiot)
just once for the team?
The way Uber created the cluster is useful when having 1 node handle all the
updates and multiple nodes providing read-only access while also providing
failover functionality.
I agree. I do remember listening to a Postgres talk by one of the
devs and while everybody's holy grail is the magical replica where you
just have a bunch of replicas and you do any operation on any replica
and everything is up to date, in reality that is almost impossible to
achieve with any solution.
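The single-writer setup Joost describes can be sketched in a few lines
(hypothetical node names and routing logic, purely illustrative): all
updates go to one primary, reads round-robin across replicas, and on
failover a replica is promoted:

```python
import itertools

class Cluster:
    """One writer, many readers, promote a reader if the writer dies."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = list(replicas)
        self._rr = itertools.cycle(self.replicas)

    def node_for(self, query):
        # All updates go to the single writer; reads round-robin.
        if query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.primary
        return next(self._rr)

    def fail_over(self):
        # Promote one replica; it stops serving as a reader.
        self.primary = self.replicas.pop(0)
        self._rr = itertools.cycle(self.replicas)

c = Cluster("pg-primary", ["pg-r1", "pg-r2"])
print(c.node_for("SELECT 1"))          # a replica
print(c.node_for("UPDATE t SET x=1"))  # pg-primary
c.fail_over()
print(c.node_for("UPDATE t SET x=1"))  # pg-r1, now promoted
```

This dodges the multi-master holy grail entirely: there is never more
than one node accepting writes, so there is nothing to reconcile.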
Yep, NoSQL is floundering mightily when requirements are stringent and
other extreme QA issues are fine-grained, from what I read. Sadly, like
yourself, I like to put on my 'common sense' glasses after an
architectural solution is presented, and I've seen mountains of bad
ideas; like BP running Prudhoe Bay (North America's largest oil field) in
the Arctic. Bad, bad idea, if you are an engineer and hang out with
those 'tards' for a few days. They collected data in the Arctic,
microwaved it to a mainframe in Anchorage, ran software, and then
microwaved control signals back to the field controllers. Beyond stupid.
They were an embarrassment to the entire petroleum industry back in the
70s, when I did some automation (RF to RF) to mainframe work in the
Arctic. Likewise, the solution to all of the drilling disasters,
worldwide, is for each country to provide real-time data to a government
monitoring station, including the status of the safety and backup
safety systems (in real time), to keep mid-level managers from making
monumentally stupid decisions. There is more than this amount of
stupidity in how many cluster (cloud) companies think large amounts of
critical data will be 'outsourced'. Bean counters scare me the most.
Sales-lizards are rarely trusted, unless they listen to me and do
exactly what I tell them to do.
It seems that there are many, many tards in the cluster (cloud) space
lacking common sense. So that (cluster/cloud) industry is going to
implode, just like the "dot-com" bubble of the 90s. Not because there
aren't lots of valid projects and good ideas, but many tards are managing,
and they lack the common sense to pour piss out of a boot, let alone
discern valid solutions for specific industries. Like a 'blind hog'::
though, they will find an acorn or two. A historical CS class or two on
what has been tried, what works and does not work and why, along with a
few (real) hardware architecture classes, and there would not be so many
ridiculous (doomed to fail before getting started) cluster (cloud)
companies out there. Developing unknown but old ideas in java is still
going to fail. Many are the BP of the cloud:: a disaster just waiting to
fail.... ymmv. Many folks in the petroleum industry warned Alaskan
government officials that BP was incompetent, back in the 70s.
They still are, mostly because the executives would not know how to
calculate the weight of a drill-stem column of fluid and match it up with
the expected subsurface pressures to be encountered. It's a simple
'material balance equation' you could teach to a HS physics class.
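That HS-physics check is just hydrostatic pressure: the mud column's
weight per unit area (P = rho * g * h) must at least match the expected
formation pressure. A back-of-envelope sketch, with illustrative numbers
of my own, not field data:

```python
# Hydrostatic pressure of the drilling-fluid column vs. expected
# subsurface (formation) pressure. Illustrative numbers only.

G = 9.81  # m/s^2, gravitational acceleration

def mud_column_pressure(density_kg_m3, depth_m):
    """P = rho * g * h, in pascals."""
    return density_kg_m3 * G * depth_m

depth = 3000.0        # m, depth of the open hole
mud_density = 1200.0  # kg/m^3, a light water-based mud
formation_p = 38e6    # Pa, expected subsurface pressure

p_mud = mud_column_pressure(mud_density, depth)
print(f"mud column: {p_mud/1e6:.1f} MPa vs formation: {formation_p/1e6:.1f} MPa")
if p_mud < formation_p:
    print("underbalanced -- risk of a kick; weight up the mud")
```

With these numbers the column comes up short (about 35 MPa against 38),
which is exactly the kind of mismatch the check is meant to catch before
anything reaches the surface the hard way.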
Likewise, there is a rich history (graveyard) of distributed processing,
and that body of knowledge is being ignored, mostly because it is
getting in the way of vendor hyperbole......
Douglas did manage to pull his own bacon from the fire at the end of
his article, but it reeks of vendor hyperbole, imho.
thanks for the comments,
James