It seems like there are two separate questions here.

Steve talked about reversible gates, and suggested them as solutions to heat 
wastage.  But I think that doesn’t go through.  I too thought of Marcus’s point 
about unitary quantum gates as the obvious case of reversibility (needed for 
them to function at all for what they are).  But quantum or Toffoli or Fredkin, 
the point of the Landauer relation et al. is that you can move around where the 
dissipation happens (out of the gate and into somewhere else), but 
reversibility itself doesn’t obviate dissipation.  (If it is to be obviated, 
that is a different question; I’ll come back in a moment to say this more 
carefully.)

The different matter of superconducting or other less-wasteful gates seems to 
be about _excess_ dissipation that can be prevented without changing the 
inherent constraints from the definition of what the computational program is.


So back to explaining the first paragraph more carefully:  As I understand it 
(so here is the place to tell me I have it wrong), the point is that we use a 
model in which the state space is a kind of product-space of two ensembles.  
One we call the data ensemble, and its real estate we have to build and then 
populate with one or another set of data values.  The other is the thermal 
ensemble which gets populated with states that we don’t individually control 
with boundary conditions, but control only in distribution by the energy made 
available to them.

Then what is the premise of computation?  It is that every statement of a 
well-formed question already contains its answer; the problem is just that the 
answer is hard to see because it is distributed among the bits of the question 
statement, along with other things that aren’t the answer.  If we maximally 
compress all this, what a computation is doing is shuffling the bits in a 
one-to-one mapping, so that the bits constituting the answer are in a known set 
of registers, and all the slag that wasn’t the answer is in the remaining 
registers.  In a reversible architecture, that can be done in isolation from 
the thermal bath, so no entropy “production” takes place at all.
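To make that one-to-one shuffle concrete, here is a toy sketch (my own 
illustration, not anything canonical from the thread) of the Toffoli case: 
computing AND reversibly puts the answer in the target register, but the 
input bits survive as exactly the sort of slag that still occupies real 
estate afterward.

```python
# Toffoli (controlled-controlled-NOT): flips c iff a and b are both 1.
# It is its own inverse, hence a bijection on the 8 three-bit states.
def toffoli(a, b, c):
    return (a, b, c ^ (a & b))

# Reversible AND: start the target at 0, and the answer lands in c --
# but a and b remain behind as "slag" still consuming registers.
a, b = 1, 1
_, _, answer = toffoli(a, b, 0)   # answer = a AND b

# Check reversibility: the gate permutes the state space, and applying
# it twice is the identity (no information is destroyed in the gate).
states = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
assert sorted(toffoli(*s) for s in states) == states       # bijection
assert all(toffoli(*toffoli(*s)) == s for s in states)     # self-inverse
```

The bijection check is the whole point: nothing is erased inside the gate, so 
nothing forces dissipation there; the cost shows up later, when the surviving 
input bits have to be cleared.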

But the slag is now still consuming real estate that you have to re-use to do 
the next computation, and even the answer has to get moved somewhere 
off-computer to re-use that part of the real estate.  If the slag is really 
slag, and you just want to get rid of it, then you are still going to offload 
entropy to somewhere in doing so.  Not in the gates that did the computation, 
maybe, but in the washer that returns clean sheets for the next day.  If we 
stay within the representational abstraction that we have only the two 
ensembles (data and thermal), then every version of that dissipates to heat.
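The washer’s price tag is the usual Landauer bound of kT ln 2 per erased bit. 
A back-of-envelope sketch (the constants are standard; the gigabyte of slag is 
a made-up example size):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature, K

# Minimum heat dissipated per bit erased into the thermal ensemble.
landauer_per_bit = k_B * T * math.log(2)   # ~2.87e-21 J

# Example: clearing 1 GiB of slag registers for the next computation.
n_bits = 8 * 2**30
min_heat = n_bits * landauer_per_bit       # joules; a lower bound only
```

Real gates today dissipate many orders of magnitude more than this per 
operation, which is why the "excess dissipation" question and the 
in-principle question come apart.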


The reason I said that whether dissipation is unavoidable or not is “a 
different question”, rather than “already known”, is that it is not obvious to 
me that one _must_ “dispose” of the slag that wasn't the “answer-part” of your 
first question.  Maybe it isn’t true slag, but part of the articulation of 
other questions that request other answers.  One might imagine (to employ the 
metaphor of the moment), “sustainable” computation, whereby all slag gets 
reversibly recycled to the Source of Questions, to be re-used by some green 
questioner another day.

That’s a fun new problem articulation, but I think the imagination that it 
solves anything just displaces the naivete to another place.  One might make 
computation “more sustainable” by realizing that there will be new questions 
later, and saving input bits in case those become useful.  But there is no 
“totally sustainable computation” unless we are sure to ask all possible 
questions, so that every bit from the Source is the answer to something that 
eventually gets used.  No Free Lunch kind of assertion.  This is Dan Dennett 
World where volition is modeled by deterministic automata.  But Dennett world 
is not our world: everything we do works because we are tiny and care about 
only a few things, with which we interact stochastically, and the world 
tolerates us in doing so.  In that world, returning the slag to the Source of 
Questions should create a kind of chemical potential for interesting questions, 
in which, like ores that become more and more rarified, finding the interesting 
questions among the slag that one won’t dispose of gets harder and harder.  So 
there should be Carnot-type limits that tell asymptotically what the minimal 
total waste could be to extract all the questions we will ever care about from 
the Source of Questions, retuning as much slag as possible over the whole 
course, and dissipating only that part that defines the boundaries of our 
interest.  That Carnot limit could be considerably less wasteful than our 
non-look-ahead Landauer bound, but it isn’t zero.  And the Maxwell Demon cost 
of the look-ahead needed to recycle the slag in an optimal manner presumably 
also diverges, by a block-coding kind of scaling argument.
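One way to see why the look-ahead bound sits below the naive Landauer bound 
but above zero: if the slag bits are statistically biased (compressible), the 
minimal erasure cost per raw bit drops from kT ln 2 to H(p)·kT ln 2, and it 
vanishes only in the degenerate case where every bit is predictable.  A toy 
calculation (the 90%-zeros bias is an arbitrary example of mine):

```python
import math

def shannon_entropy(p):
    """Entropy in bits of a biased coin with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

k_B, T = 1.380649e-23, 300.0
per_bit = k_B * T * math.log(2)     # naive Landauer cost per raw bit

# Slag that is 90% zeros carries only ~0.469 bits of entropy per raw
# bit, so an optimal (look-ahead, compress-then-erase) strategy pays
# proportionally less -- but strictly more than zero.
p = 0.1
optimal_cost = shannon_entropy(p) * per_bit
naive_cost = per_bit
```

This is only the equilibrium accounting, of course; it says nothing about the 
diverging cost of the look-ahead itself.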

Could be a delightful academic exercise, to work out any tiny model to 
illustrate this concept.  If I were smart and had time, I would want to do it.  
But then those with social urgency would chop my head off too, in the next 
French Revolution, for having wasted time doing academic things when I should 
have been providing a more useful service.  (Sorry, between meetings and the 
incoming emails over the past few days, I have been spending lots of time with 
those who think that the reason there are still problems in the world is that 
we let go of the Struggle Sessions too early.  I can’t argue that they are 
wrong, so I am keeping my head down in public.)

Eric





> On Jan 12, 2025, at 6:07, Marcus Daniels <mar...@snoutfarm.com> wrote:
> 
> The unitary gates of quantum computers provide a similar capability.  
> I had hoped that superconducting classical circuits would keep getting 
> developed too, but that too seems stuck.  Northrop Grumman was working on it 
> for a while.   At least recent Supercomputing conferences have lots of 
> spiffy cooling systems now, like nonconductive immersion cooling.
> 
> https://events.vtools.ieee.org/m/196286
>  
> From: Friam <friam-boun...@redfish.com> on behalf of steve smith 
> <sasm...@swcp.com>
> Date: Saturday, January 11, 2025 at 12:18 PM
> To: friam@redfish.com
> Subject: [FRIAM] Fredkin/Toffoli, Reversibility and Adiabatic Computing.
> 
> Re Reversible Computing and Fredkin/Toffoli gates.   I'm fascinated with the 
> apparent lack of progress in this general field.   
> 
> When they spoke on this topic in 1983 (and I think Feynman referenced it in 
> his updated "plenty of room at the bottom" (ca 1959) talk in the context of 
> speculating as to whether "molecular computing" might be a good candidate for 
> this style of reversibility).     Every time it comes up (every decade?) I 
> kick myself for not paying closer attention, but the lack of progress 
> suggests it is "before its time" or perhaps a total "wild goose".
> 
> I don't remember if they referenced "adiabatic" computing and I have not 
> followed up references to understand that trade-space... it feels like a 
> TANSTAAFL argument which only carries traction in edge/corner cases, though 
> computing in biologic and (other) molecular scale contexts might well make 
> that trade (to avoid thermal problems)?   Whether Universal Assembler NT or 
> biologic self-assembly "circuits".
> 
> What *little* update I've been able to obtain specifically on Toffoli and 
> Fredkin gate based reversible circuits suggests that the space/time cost is 
> order 4X to 8X in combined increased real-estate and latency?   Intuition 
> suggests to me that such is worth it in these new giga-scale AI training 
> contexts, if the thermal gains are as significant as suggested.  
> Theoretically the reversibility and thermodynamic implications might be 
> absolute but practically not (at least in electronic circuits, maybe not in 
> photonic?).
> 
> The most recent survey paper I found was 2013 and it already seems too dense 
> for me, so maybe I will stall in this quest/reflection.   
> 
> https://arxiv.org/pdf/1309.1264
> 
> I understand that Quantum Computing (in some forms?) is reversible so 
> some/many of the issues are likely shared?  Our resident Quantum Alchemist 
> (or other CS/EECE wizards here) might be able to shed some light?
> 
> - Steve
> 
>  
> 
> On 1/11/25 8:38 AM, steve smith wrote:
> 
> 
> SFI had one of these for a while.  (As far as I know it just sat there.)
>  
> http://www.ai.mit.edu/projects/im/cam8/
> 
> Nowadays GPUs are used for Lattice Boltzmann.
>  
> 
> such a blast-from-past with the awesome 90's stylized web page and the pics 
> of the SUN (and Apollo?) workstations!
> 
> CAM8 is clearly the legacy of Margolus' work (MIT).   At the time (1983) I 
> remember handwired/soldered breadboards and I think banks of memory chips 
> wired through logic gates and such... I think this was pre SUN days (I had an 
> M68000 Wicat Unix box on my desktop which sported a massive 5MB hard drive 
> with pruned down BSD variant installed on it).  In fact, that was where I ran 
> the MT simulations (tuning rules until I got "interesting" activity, running 
> parameter sweeps, etc).
> 
> When GPUs first rose up (SGI) they seemed hyper-appropriate to the purpose but 
> alas, I had no spare cycles at that point in my career to look into it.  
> Just a few years ago when I was working on the Micoy Omnistereoscopic "camera 
> ball" (you mentioned it looked a bit like a coronavirus particle) I had 
> specced out an FPGA fabric solution (with a dedicated FPGA wired directly 
> between every adjacent overlapping camera pair - 52 cameras) to do realtime 
> image de-distortion/stitching with the special considerations which stereo 
> vision adds.   I never became a VHDL programmer but I did become familiar with 
> the paradigm... I think I tried to engage Roger at a Wedtech on the topic 
> when he was (also) investigating FPGAs.  (circa 2016?)   
> 
> At that time, my fascination with CA had evolved into variations on Gosper's 
> Hashlife...  so GPU and FPGA fabric didn't seem as apt, though TPUs do seem 
> (more) apt for the implicit data structures (hashed quad-trees).
> 
>  
> 
> The new nVidia DGX concentrated TPU system for $3k is fascinating and 
> triggers my thoughts (not very coherent) about the tradeoffs between power 
> and entropy and "complexity".
> 
> A dive down this 1983/4 rabbit hole led me (also) to the Toffoli and Fredkin 
> Gates and Reversible Computing.   More on that in a few billion more 
> neural/GPT cycles...
> 
>  
> .- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
> --- -- . / .- .-. . / ..- ... . ..-. ..- .-..
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/

