The unitary gates of quantum computers provide a similar capability. 
I had hoped that superconducting classical circuits would keep being 
developed too, but that effort also seems stuck; Northrop Grumman was working on it for 
a while. At least recent Supercomputing conferences have lots of spiffy 
cooling systems, like nonconductive immersion cooling.

https://events.vtools.ieee.org/m/196286 

From: Friam <friam-boun...@redfish.com> on behalf of steve smith 
<sasm...@swcp.com>
Date: Saturday, January 11, 2025 at 12:18 PM
To: friam@redfish.com <friam@redfish.com>
Subject: [FRIAM] Fredkin/Toffoli, Reversibility and Adiabatic Computing. 

Re reversible computing and Fredkin/Toffoli gates: I'm fascinated by the 
apparent lack of progress in this general field. 
They spoke on this topic in 1983, and I think Feynman referenced it in his 
updated "plenty of room at the bottom" (ca. 1959) talk, in the context of 
speculating as to whether "molecular computing" might be a good candidate for 
this style of reversibility. Every time it comes up (every decade?) I kick 
myself for not paying closer attention, but the lack of progress suggests it is 
"before its time" or perhaps a total "wild goose chase". 
I don't remember whether they referenced "adiabatic" computing, and I have not 
followed up the references to understand that trade space... it feels like a 
TANSTAAFL argument that only gains traction in edge/corner cases, though 
computing in biological and (other) molecular-scale contexts might well make that 
trade (to avoid thermal problems), whether in Universal Assembler NT or biological 
self-assembled "circuits". 
What *little* update I've been able to obtain specifically on Toffoli- and 
Fredkin-gate-based reversible circuits suggests that the space/time cost is on the 
order of 4X to 8X in combined increased real estate and latency? Intuition 
suggests to me that this would be worth it in these new giga-scale AI training 
contexts, if the thermal gains are as significant as suggested. Theoretically 
the reversibility and thermodynamic implications might be absolute, but 
practically not (at least in electronic circuits; maybe not in photonic?). 
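One way to see where that real-estate overhead comes from (a toy sketch, same spirit as above): an ordinary AND destroys information, so its reversible version needs an ancilla input and drags "garbage" outputs along that must be stored or later uncomputed.

    def toffoli(a, b, c):
        return a, b, c ^ (a & b)

    def reversible_and(a, b):
        # An ordinary AND maps 2 bits to 1 and destroys information, so the
        # reversible version feeds a Toffoli an ancilla initialized to 0 and
        # keeps the original inputs around as "garbage" outputs: 2 useful
        # bits become 3 stored bits (plus the later cost of uncomputing).
        a_out, b_out, result = toffoli(a, b, 0)
        return result, (a_out, b_out)

    for a in (0, 1):
        for b in (0, 1):
            result, garbage = reversible_and(a, b)
            assert result == (a & b)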
The most recent survey paper I found was from 2013, and it already seems too dense 
for me, so maybe I will stall in this quest/reflection. 
https://arxiv.org/pdf/1309.1264 
I understand that Quantum Computing (in some forms?) is reversible, so some/many 
of the issues are likely shared? Our resident Quantum Alchemist (or other 
CS/EECE wizards here) might be able to shed some light? 
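On that quantum connection: unitary evolution is reversible by definition, and the Toffoli gate shows up there as an 8x8 permutation matrix. A quick numpy check (a sketch, nothing quantum-hardware-specific):

    import numpy as np

    # Toffoli as an 8x8 permutation of the basis states |abc>: it swaps
    # |110> and |111> and leaves everything else alone.
    U = np.eye(8)
    U[[6, 7]] = U[[7, 6]]

    assert np.allclose(U @ U.T, np.eye(8))   # unitary (real, so orthogonal)
    assert np.allclose(U @ U, np.eye(8))     # self-inverse, i.e. reversible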
- Steve 

On 1/11/25 8:38 AM, steve smith wrote: 




SFI had one of these for a while. (As far as I know it just sat there.) 

http://www.ai.mit.edu/projects/im/cam8/ 

Nowadays GPUs are used for Lattice Boltzmann. 


such a blast from the past with the awesome '90s-style web page and the pics of 
the Sun (and Apollo?) workstations! 
CAM8 is clearly the legacy of Margolus' work (MIT). At the time (1983) I 
remember hand-wired/soldered breadboards and, I think, banks of memory chips wired 
through logic gates and such... I think this was pre-Sun days (I had an M68000 
Wicat Unix box on my desktop which sported a massive 5MB hard drive with a pruned-down 
BSD variant installed on it). In fact, that was where I ran the MT 
simulations (tuning rules until I got "interesting" activity, running parameter 
sweeps, etc.). 
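For anyone who never met the CAM machines: the Margolus neighborhood partitions the grid into 2x2 blocks, applies an invertible rule to each block, and alternates the block offset each step, which makes the whole CA exactly reversible. A toy version with a simple block-rotation rule (my own illustration, not the CAM-8 rule set):

    import numpy as np

    def margolus_step(grid, phase, k=-1):
        # Partition into 2x2 blocks (offset by one cell on odd phases) and
        # rotate each block 90 degrees clockwise (k=-1). The block rule is a
        # bijection, so the whole CA step is exactly reversible.
        g = np.roll(grid, (-phase, -phase), axis=(0, 1))
        h, w = g.shape
        blocks = g.reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2)
        blocks = np.rot90(blocks, k=k, axes=(2, 3))
        g = blocks.swapaxes(1, 2).reshape(h, w)
        return np.roll(g, (phase, phase), axis=(0, 1))

    rng = np.random.default_rng(0)
    start = rng.integers(0, 2, size=(8, 8))

    # Run forward, then undo with the inverse rule (counter-clockwise, k=1)
    # in reverse order: the starting grid comes back bit-for-bit.
    state = start
    for t in range(6):
        state = margolus_step(state, t % 2)
    for t in reversed(range(6)):
        state = margolus_step(state, t % 2, k=1)
    assert np.array_equal(state, start)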
When GPUs first rose up (SGI) they seemed hyper-appropriate to the purpose, but 
alas, I had no spare cycles at that point in my career to look into it. Just a 
few years ago, when I was working on the Micoy omnistereoscopic "camera ball" 
(you mentioned it looked a bit like a coronavirus particle), I had specced out an 
FPGA fabric solution (with a dedicated FPGA wired directly between every 
adjacent overlapping camera pair - 52 cameras) to do realtime image 
de-distortion/stitching with the special considerations that stereo vision 
adds. I never became a VHDL programmer, but I did become familiar with the 
paradigm... I think I tried to engage Roger at a Wedtech on the topic when he 
was (also) investigating FPGAs (circa 2016?). 
At that time, my fascination with CA had evolved into variations on Gosper's 
Hashlife... so GPU and FPGA fabrics didn't seem as apt, though TPUs do seem 
(more) apt for the implicit data structures (hashed quad-trees). 
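For readers who haven't met it: the core trick in Hashlife is canonicalizing quadtree nodes in a hash table so identical regions of the universe are stored (and evolved) exactly once. A bare-bones sketch of just that node-sharing layer (not the superspeed time-stepping):

    from collections import namedtuple

    Node = namedtuple("Node", "nw ne sw se level")
    _pool = {}

    def make_node(nw, ne, sw, se):
        # Canonicalize quadtree nodes: identical sub-quadrants anywhere in
        # the universe become the same object, which is what lets Hashlife
        # memoize a region's future once and reuse it everywhere.
        key = (nw, ne, sw, se)
        node = _pool.get(key)
        if node is None:
            level = nw.level + 1 if isinstance(nw, Node) else 1
            node = Node(nw, ne, sw, se, level)
            _pool[key] = node
        return node

    # Two separately built 2x2 blocks with the same contents share storage:
    assert make_node(0, 1, 1, 0) is make_node(0, 1, 1, 0)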

The new nVidia DGX concentrated TPU system for $3k is fascinating and triggers 
my thoughts (not very coherent) about the tradeoffs between power, entropy, 
and "complexity". 
A dive down this 1983/4 rabbit hole led me (also) to the Toffoli and Fredkin 
gates and reversible computing. More on that in a few billion more neural/GPT 
cycles... 

