On 08/01/2016 02:16 AM, J. Roeleveld wrote:
On Saturday, July 30, 2016 06:38:01 AM Rich Freeman wrote:
On Sat, Jul 30, 2016 at 6:24 AM, Alan McKinnon <alan.mckin...@gmail.com>
wrote:
On 29/07/2016 22:58, Mick wrote:
Interesting article explaining why Uber are moving away from PostgreSQL. I am
running both DBs on different desktop PCs for akonadi, and I'm also running
MySQL on a number of websites. Let's see which one goes sideways first. :p
https://eng.uber.com/mysql-migration/
I don't think your akonadi and some web sites compares in any way to Uber
and what they do.
FWIW, my dev colleagues support an entire large corporate ISP's
operational and customer data on PostgreSQL-9.3. With clustering. With no
db-related issues :-)
Agree, you'd need to be fairly large-scale to have their issues,
And you'd also have to have had your database designed by people who think
MySQL actually follows common SQL standards.
but I
think the article was something anybody interested in databases should
read. If nothing else it is a really easy to follow explanation of
the underlying architectures.
Check the link posted by Douglas.
Uber's article has some misunderstandings about the architecture, with
conclusions that are, at least in part, caused by their own database design
and usage.
I'll probably post this to my LUG mailing list. I think one of the
Postgres devs lurks there, so I'm curious about his impressions.
I was a bit surprised to hear about the data corruption bug. I've
always considered Postgres to have a better reputation for data
integrity.
They do.
And of course almost any FOSS project could have a bug. I
don't know if either project does the kind of regression testing needed to
reliably detect this sort of issue.
Not sure either; I do think PostgreSQL does a lot with regression tests.
I'd think that it is more likely
that the likes of Oracle would (for their flagship DB, not for MySQL,
Never worked with Oracle (or other big software vendors), have you? :)
and they'd probably be more likely to send out an engineer to beg
forgiveness while they fix your database).
Only if you're a big (as in, spend a lot of money with them) customer.
Of course, if you're Uber
the hit you'd take from downtime/etc isn't made up for entirely by
having somebody take a few days to get everything fixed.
--
Joost
I certainly respect your skills and posts on databases, Joost, as
everything you have posted in the past is 'spot on'. Granted, I'm no
database expert, far from it. But I want to share a few things with you,
and hope you (and others) will 'chime in' on these comments.
Way back, when the earth was cooling and we all had dinosaurs for pets,
some of us hacked on AT&T "3B2" unix systems. They were known for their
'roll back and recovery', triplicated (or more) transaction processes,
and 'voter' systems to ferret out whether a transaction was complete and
correct. There was no ACID, the current 'gold standard' if you believe
what Douglas and others write about concerning databases.
In essence (from crusted-up memories), a basic (SS7) transaction related
to the local telephone switch was run on 3 machines. The results were
compared. If they all matched, the transaction went forward as valid. If
2/3 matched, and the switch was so configured, then the code would
essentially 'vote' and majority ruled. This is what led to phone calls
(switched phone calls) having variable delays, often on the order of
seconds, mis-connections, and other problems we all encountered during
periods of excessive demand.
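For what it's worth, that 2-of-3 voting step sketches out in a few lines.
A minimal illustration, with Python and all the names here being mine,
standing in for whatever the switch actually ran:

    from collections import Counter

    def vote(results):
        # Majority vote over replica results: any strict majority
        # (e.g. 2 of 3) wins; no majority means the transaction fails.
        winner, votes = Counter(results).most_common(1)[0]
        if votes * 2 > len(results):
            return winner
        raise RuntimeError("no majority -- transaction rejected")

    def run_transaction(txn, replicas):
        # Run the same transaction on every replica, then vote.
        return vote([replica(txn) for replica in replicas])

So vote(["commit", "commit", "abort"]) goes forward as "commit", while
three different answers gets the transaction rejected outright. The
repeated runs plus the compare step are where those delays came from.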
That scenario was at the heart of how old, crappy AT&T unix (SVR?) could
perform so well, and it therefore established the gold standard for RT
transaction processing, aka the "five 9s": 99.999% uptime (about 5
minutes per year of downtime). Sure, this part relates only to
transaction processing, and there was much more to the "five 9s" legacy,
but imho that is the heart of what was the precursor to the ACID
properties now so greatly espoused in the SQL databases that Douglas
refers to.
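The "five 9s" arithmetic checks out; a quick verification (Python, purely
illustrative):

    MINUTES_PER_YEAR = 365.25 * 24 * 60           # ~525,960 minutes
    downtime = MINUTES_PER_YEAR * (1 - 0.99999)   # five 9s of uptime
    print(round(downtime, 2))                     # -> 5.26 minutes/year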
Do folks concur or disagree at this point?
The reason this is important to me (and others?) is that, if this idea
(granted, there is much more detail to it) is still valid, then it can
form the basis for building up superior-ACID processes that meet or
exceed the properties of an expensive (think Oracle) transaction process
on distributed (parallel) or clustered systems, to a degree of accuracy
limited only by the number of odd-numbered voter codes involved in the
distributed and replicated parts of the transaction. I even added some
code where replicated routines were written in different languages, and
the results were compared to add an additional layer of verification
before the voter step. (gotta love assembler?)
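Here is a hedged sketch of that extra verification layer, with independent
Python functions standing in for routines written in different languages;
the toy implementations, and the integer x >= 0 assumption, are mine,
purely for illustration:

    from collections import Counter

    def impl_a(x): return x * x           # version 1
    def impl_b(x): return x ** 2          # version 2, written independently
    def impl_c(x): return sum([x] * x)    # version 3 (assumes integer x >= 0)

    def n_version_result(x, versions=(impl_a, impl_b, impl_c)):
        # An odd voter count means a strict majority is always possible.
        assert len(versions) % 2 == 1, "use an odd number of voters"
        winner, votes = Counter(v(x) for v in versions).most_common(1)[0]
        if votes * 2 > len(versions):
            return winner
        raise RuntimeError("versions disagree -- no majority")

The point of writing the versions independently is that they are unlikely
to share the same bug, so agreement means something.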
I guess my point is that 'Douglas' is full of stuffing, OR that is what
folks are doing when they 'roll their own solution specifically
customized to their specific needs', as he alludes to near the end of
his commentary? (I'd like your opinion on this, and maybe some links to
current schemes for achieving ACID/99.999%-accurate transactions on
clusters of various architectures.) Douglas, like yourself, writes of
these things in a very lucid fashion, so that is why I'm asking you for
your thoughts.
Robustness of transactions in a distributed (clustered) environment is
fundamental to the usefulness of most codes that are trying to migrate
to cluster-based processes in (VM/container/HPC) environments. I do not
have the old articles handy, but I'm sure that many/most of those types
of inherent processes can be formulated in the algebraic domain,
normalized, and used to solve decisions, often where other forms of
advanced logic failed (not that I'm taking a cheap shot at modern
programming languages) (wink wink, nudge nudge); or at least that's how
we did it.... as young whipper_snappers back in the day...
--an_old_farts_logic
curiously,
James