I don't know quite how to answer your main question. But I'll lob a ton of 
words at it anyway. We *are* developing asynch computing. The IoT domain might 
be the most relevant place such things are happening. The problem is the 
advances just aren't as sexy as [movie trailer voice] Quantuuummmm 
Mechaaannniiicssss. I like Eric's comment from a while back that words like 
"entanglement" are problematic 
(http://friam.471366.n2.nabble.com/A-question-for-tomorrow-td7593073i100.html). 
 In distributed computing (DC), we have similarly problematic words like 
"cloud" or "fog" or "edge" or whatever the new iterations are. [sigh]

But again going back to the archives in that same thread, we've already talked 
a bit about the relatively subtle relationship between QC and DC. In many ways, 
QC *is* a form of DC, just not, as you rightly point out, asynchronous in our 
vernacular understanding of time- or clock-synced. Maybe a more general term 
like "coupling" could be used? Clock-coupled computations can be distributed 
over space and space-coupled computations can be distributed over time. This 
implies a 2D plane with clock coupling on one axis and space coupling on 
another. Fully space-time decoupled computation might be in the lower left 
quadrant and fully space-time coupled computations in the upper right. I'd 
argue that QC is time-coupled and (a little bit) space decoupled, kindasorta 
like a GPU. IoT can be fully decoupled, each tiny computer in each far-flung 
location running its own (fast or slow) clock.
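
To make that plane concrete, here's a toy sketch in Python. The placements and 
numbers are just my guesses, nothing rigorous, but they show the two axes I 
have in mind:

# toy sketch: the placements below are guesses, just to make the 2D plane concrete
from dataclasses import dataclass

@dataclass
class Computation:
    name: str
    clock_coupling: float   # 0 = fully clock-decoupled, 1 = lock-step
    space_coupling: float   # 0 = fully space-decoupled, 1 = co-located

examples = [
    Computation("single CPU core", clock_coupling=1.0, space_coupling=1.0),
    Computation("GPU / SIMD",      clock_coupling=1.0, space_coupling=0.8),
    Computation("QC (as argued)",  clock_coupling=1.0, space_coupling=0.7),
    Computation("grid computing",  clock_coupling=0.6, space_coupling=0.2),
    Computation("IoT",             clock_coupling=0.1, space_coupling=0.0),
]

def quadrant(c):
    t = "time-coupled" if c.clock_coupling >= 0.5 else "time-decoupled"
    s = "space-coupled" if c.space_coupling >= 0.5 else "space-decoupled"
    return t + ", " + s

for c in examples:
    print(f"{c.name:16s} -> {quadrant(c)}")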

To toss in more word salad, time and space are not the only ways we can 
[de]couple things. The distinction between grid computing and heterogeneous 
computing is pretty interesting to me, both personally and professionally. One 
of my pet peeves (well exhibited in this forum, I think) is the *prejudice* we 
show for classical math. My biological modeling colleagues are WAY into 
math-porn. They love building models founded in differential equations and 
trying to match them up with data from the lab. I (like to believe I) am fully 
agnostic and willing to implement *anything*, even if it's not based on a 
formalism at all ... even if it's just cobbled together nonsense. (I lament the 
hegemony of formalism in artificial life, for example. Early on, we had 
arbitrary people with no expertise in anything even remotely formal showing 
their cobbled together *art* that kindasorta looked "alive".) As such, I 
advocate for (jargonally offensive) multi-paradigm models, where different 
*logics* are glued together to produce a Frankenstein's monster multi-model. 
Grid computing always seemed something like a mono-crop, catastrophically 
susceptible to the slightest infection. As Hewitt points out (e.g. 
https://hal.archives-ouvertes.fr/hal-01148293/document), the unification of 
large-scale information systems into singular foundations (supportive of strong 
definitions of consistency) is not *merely* a problem of engineering. It has 
philosophical dimensions. (Which, again, manifests in my skepticism surrounding 
"monism". Nick is not merely an infidel, he's a *heretic* ... out there trying 
to convert us faithful pluralists to his heresy.)
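
For a cartoon of what I mean by gluing *logics* together, here's a toy 
multi-model in Python. One submodel speaks differential equations (logistic 
growth, Euler-stepped), the other speaks a crude rule-based logic (a harvesting 
switch with hysteresis), and the glue just shuttles state between them each 
tick. Every name and number is invented for illustration:

# toy multi-model: all parameters and thresholds are invented for illustration
def ode_step(x, dt, r=0.5, K=100.0, harvest=0.0):
    # continuous logic: dx/dt = r*x*(1 - x/K) - harvest, Euler-stepped
    return x + dt * (r * x * (1.0 - x / K) - harvest)

def rule_step(x, harvesting):
    # rule-based logic: a hysteresis switch on the stock level
    if harvesting and x < 40.0:
        return False           # stop harvesting when the stock gets low
    if not harvesting and x > 80.0:
        return True            # start harvesting when the stock recovers
    return harvesting

x, harvesting, dt = 10.0, False, 0.1
for step in range(1500):
    # the glue: each tick, shuttle state between the two logics
    x = ode_step(x, dt, harvest=15.0 if harvesting else 0.0)
    harvesting = rule_step(x, harvesting)
    if step % 150 == 0:
        print(f"t={step * dt:6.1f}  x={x:6.2f}  harvesting={harvesting}")

Neither submodel knows the other's formalism, which is exactly the point.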

So, even in the tamest subculture of DC, the multi-modeling that is hybrid or 
"cyber-physical" systems, we see badass advances in how to couple and decouple 
continuous sub-components with discrete ones, discrete-time with discrete-event 
submodels, etc. But such advances don't get headlines on your favorite "news" 
website.
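
Here's a minimal sketch of that kind of coupling, again with made-up names and 
numbers: a tank drains continuously (Euler-integrated in discrete time) while 
refill events fire from a discrete-event queue.

# toy hybrid model: times, rates, and amounts are invented for illustration
import heapq

def drain(level, dt, k=0.3):
    # continuous sub-component: dL/dt = -k * level, Euler-stepped
    return level + dt * (-k * level)

refills = [(1.0, 5.0), (2.5, 3.0), (4.0, 7.0)]   # (time, amount) events
heapq.heapify(refills)

level, dt, n_steps = 10.0, 0.01, 600
for i in range(n_steps):
    t = i * dt
    # discrete-event sub-component: fire any refill whose time has arrived
    while refills and refills[0][0] <= t:
        _, amount = heapq.heappop(refills)
        level += amount
        print(f"t={t:4.2f}  refill +{amount}  level={level:.2f}")
    # discrete-time glue around the continuous dynamics
    level = drain(level, dt)

print(f"t={n_steps * dt:4.2f}  final level={level:.2f}")

Crude, sure ... events only fire on step boundaries ... but the interleaving is 
the whole trick.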


On 9/25/19 9:38 AM, Gillian Densmore wrote:
> ok ok let me see if my average nearness understands the latest advances: 
> Regular computers: do a lot of things. one at a time. 
> Qbit Computing: does a lot of tasks at once, but in some somewhat random way, so 
> it can actually take longer to do.
> However: googles skunk works has a few that are much faster than before. So 
> while still random, it's just much faster at randomness...or has less 
> randomness in it?
> 
> This leads me to a consistent question: is there a reason we don't also have 
> asynchronous computational power vastly better developed then? oO That's 
> basically why the GPU is so effing awesome for a lot of things. But the 
> CPU(S) and a lot of TPU hit hardcore bottlenecks.
> I ask while Qbits have an amazing amount of potential- and may or may not do 
> what we'd hope they can. 
> 
> So why not also make bad ass asynchronous computing more of a thing as well?
> 
> On Tue, Sep 24, 2019 at 3:16 PM Roger Critchlow <r...@elf.org 
> <mailto:r...@elf.org>> wrote:
> 
>     Most clarifying.
> 
>     On Tue, Sep 24, 2019 at 2:55 PM glen <geprope...@gmail.com 
> <mailto:geprope...@gmail.com>> wrote:
> 
>         https://www.scottaaronson.com/blog/?p=4317

-- 
☣ uǝlƃ
