Not addressing your proposal, but here is one of the companies doing research 
in this area:

https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html

-----Original Message-----
From: Friam <[email protected]> On Behalf Of David Eric Smith
Sent: Wednesday, April 13, 2022 1:36 PM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] Selective cultural processes generate adaptive heuristics

So Glen’s line of questioning here prompts a question in me.  Partly ignorant 
(can’t be helped) and partly lazy (hasn’t been helped).

We have two things we have said at various times we would like to understand:

1. Unpacking the black box of whatever-NN representations of patterns; 

2. Getting to a “theory” of “what language is” and how we should think of its 
tokens and structures and acts in relation to not-clearly-conceived notions of 
“meaning” or “reality”, on which language is supposed to be some sort of 
window, however dirty.

So if we gave a bunch of autonomously-hosted NNs some load of work to keep them 
all busy, and offered them signal-exchange channels that they could, if they 
happened to, employ to coordinate each other’s autonomously-generated 
operations:

1A. Would their emergent signaling systems come, among other things, to 
constitute representations of the “inside” of the “black box” we have been 
saying we want representations of, representations not simply coextensive with 
reporting the input-output pairs that the black box is producing? and;

2A. Would their whole coordination system be in any interesting sense an 
instance of “language”, and since we would be able to look at the whole thing, 
and not be restricted to operating through the channel of exchanged signals, 
would there be anything interesting to learn about the inherent distortions or 
artifacts or lacunae of language, as referred to some more broadly-anchored 
senses of “meaning” or “reality”?

I could imagine that we would complain that the coordination-traffic purported 
to be a representation of the black box but did a terrible job of actually 
being one (or was one that we couldn’t use to satisfy what _we_ want from a 
representation), but perhaps we could see some familiar patterns in the 
disappointments (?).  
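As a toy illustration of what “emergent signaling systems” can mean in the 
minimal textbook case (a Lewis signaling game, not Eric’s proposed setup -- 
every name below is made up for illustration): two bare reinforcement 
learners, a sender and a receiver, start from uniform noise and, by reward 
alone, lock in a shared signal convention that an outside observer can read 
off from the weights.

```python
# Toy Lewis signaling game with Roth-Erev ("urn") reinforcement.
# A sketch only; not any particular library's API.
import random

random.seed(0)

N = 2  # number of world-states, signals, and actions
# sender_w[state][signal] and receiver_w[signal][action] start uniform.
sender_w = [[1.0] * N for _ in range(N)]
receiver_w = [[1.0] * N for _ in range(N)]

def draw(weights):
    """Sample an index in proportion to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

history = []
for _ in range(5000):
    state = random.randrange(N)          # the world picks a state
    signal = draw(sender_w[state])       # sender emits a signal
    action = draw(receiver_w[signal])    # receiver acts on the signal
    success = (action == state)
    if success:                          # reinforce only on success
        sender_w[state][signal] += 1.0
        receiver_w[signal][action] += 1.0
    history.append(success)

early = sum(history[:1000]) / 1000   # near chance (0.5)
late = sum(history[-1000:]) / 1000   # near 1.0 once a convention locks in
print(f"success rate: first 1000 = {early:.2f}, last 1000 = {late:.2f}")
```

With two states this almost always locks into one of the two perfect 
conventions; with more states it can also get stuck in partial-pooling 
equilibria, which may be the first of the familiar patterns in the 
disappointments.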

To cut to a small, concrete dataset, we could try it with a kind of 
bootstrapping partition of the zero-shot translation, in which we autonomously 
host translation-learning on subsets of languages, but we keep firewalls 
between the learners so that for every learner, there are languages or language 
pairs to which it is not given direct access.  We have reason to think that 
each learner would develop a decent version of some internal meta-language, and 
that there would be coherences of structure across them, since that is what the 
zero-shot team claims already to have demonstrated.  But with different subsets 
of languages as inputs (and, perhaps no less, accidents of the training path 
due simply to noise), there should also be differences and things each one 
misses.  Cross-platform signaling could at least in principle have some 
available information to convey, as well as much information that it would not 
need to convey, because the shared ancestry of the learning algorithms on each 
platform makes talking about it unnecessary.  The target we would be after in 
posing the problem is: to what degree is the cross-platform communication 
insightful to us about the thing we have said we want a representation of (the 
“meta-language” “within” each zero-shot learner, which is its version of some 
patterns in the world)?
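To make the bookkeeping of that partition concrete, a sketch (the languages 
and subset assignments are arbitrary placeholders, not a claim about any real 
zero-shot system): each learner sees only the translation pairs internal to 
its subset, so every learner has pairs reachable only “zero-shot”, and those 
withheld pairs differ across learners -- which is what would give 
cross-platform signaling something to say.

```python
# Bookkeeping sketch for the firewalled-learner partition described above.
from itertools import combinations

languages = {"en", "fr", "de", "ja", "sw"}
all_pairs = {frozenset(p) for p in combinations(sorted(languages), 2)}

# Each autonomously-hosted learner trains only on pairs internal to its
# own subset; the "firewall" is everything outside that subset.
learners = {
    "A": {"en", "fr", "de"},
    "B": {"de", "ja", "sw"},
    "C": {"en", "ja", "sw"},
}

def direct_pairs(subset):
    """Language pairs a learner is given direct access to."""
    return {frozenset(p) for p in combinations(sorted(subset), 2)}

withheld = {name: all_pairs - direct_pairs(subset)
            for name, subset in learners.items()}

for name, missing in withheld.items():
    # every learner is firewalled off from at least one pair ...
    assert missing, f"learner {name} has no withheld pairs"
# ... and no two learners miss exactly the same pairs, so each one has
# something the others could, in principle, learn about from it.
assert len({frozenset(m) for m in withheld.values()}) == len(learners)
```

The two asserts are the conditions the proposal needs: each learner has a gap, 
and the gaps are not identical across learners.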

Has a lot of this already been done by somebody?  It seems like the first thing 
that somebody who knows nothing would propose to do, so I assume it has already 
been done to the point where people lost interest and went on to something else.

Eric



> On Apr 14, 2022, at 12:36 AM, glen <[email protected]> wrote:
> 
> But we don't "create the neural structure over and over", at least we don't 
> create the *same* neural structure over and over. One way in which 
> big-data-trained self-attending ANN structures now mimic meat intelligence is 
> in that very intense training period. Development (from zygote to 
> (dysfunctional) adult) is the training. Adulting is the testing/execution. 
> But these transformer-based mechanisms don't seem, in my ignorance, to be as 
> flexible as those grown in meat. Do we have self-attending machines that can 
> change what parts of self they're attending? Change from soft to hard? Allow 
> for self-attending the part that's self-attending (and up and around in a 
> loopy way)? To what extent can we make them modal, swapping from learning 
> mode to perform mode? As SteveS points out, can machine intelligence "play" 
> or "practice" in the sense normal animals like us do? Are our modes even 
> modes? Or is all performance a type of play? To what extent can we make them 
> "social", collecting/integrating multiple transformer-based ANNs so as to 
> form a materially open problem solving collective?
> 
> Anyway, it seems to me the neural structure is *not* an encoding of a means 
> to do things. It's a *complement* to the state(s) of the world in which the 
> neural structure grew. Co-evolutionary processes seem different from 
> encoding. Adversaries don't encode models of their opponents so much as they 
> mold their selves to smear into, fit with, innervate, anastomose [⛧], their 
> adversaries. This is what makes 2 party games similar to team games and 
> distinguishes "play" (infinite or meta-games) from "gaming" (finite, or 
> well-bounded payoff games).
> 
> Again, I'm not suggesting machine intelligence can't do any of this; or even 
> that they aren't doing it to some small extent now. I'm only suggesting 
> they'll have to do *more* of it in order to be as capable as meat 
> intelligence.
> 
> [⛧] I like "anastomotic" for adversarial systems as opposed to "innervated" 
> for co-evolution because anastomotic tissue seems (to me) to result from a 
> kind of high pressure, biomechanical stress. Perhaps an analogy of soft 
> martial arts styles to innervate and hard styles to anastomose?
> 
> On 4/12/22 20:43, Marcus Daniels wrote:
>> Today, humans go to some length to record history, to preserve companies and 
>> their assets.  But for some reason preserving the means to do things -- the 
>> essence of a mind -- has a different status.  Why not seek to inherit minds 
>> too?  Sure, I can see that the same knowledge base can be represented in 
>> different ways.  But studying those neural representations could also be 
>> informative.  What if neural structures have similar topological properties 
>> given some curriculum?  What a waste to create that neural structure over 
>> and over.
>> -----Original Message-----
>> From: Friam <[email protected]> On Behalf Of Steve Smith
>> Sent: Tuesday, April 12, 2022 7:22 PM
>> To: [email protected]
>> Subject: Re: [FRIAM] Selective cultural processes generate adaptive 
>> heuristics
>> 
>> On 4/12/22 5:53 PM, Marcus Daniels wrote:
>>> I am not saying such a system would not need to be predatory or parasitic, 
>>> just that it can be arranged to preserve the contents of a library.
>> And I can't help knee-jerking that when a cell attempts to live forever 
>> (and/or replicate itself perfectly) it becomes a tumour in the organ(ism) 
>> that gave rise to it, and even metastasizes, spreading its hubris to other 
>> organs/systems.
>> Somehow, I think the inter-planetary post-human singularians are more like 
>> metastatic cells than "the future of humanity".  Maybe that is NOT a 
>> dead-end, but my mortality-chauvinistic "self" rebels.  Maybe if I live 
>> long enough I'll come around... or maybe there will be a CAS-mediated edit 
>> to fix that pessimism in me.
>>>> On Apr 12, 2022, at 4:29 PM, glen <[email protected]> wrote:
>>>> 
>>>> Dude. Every time I think we could stop, you say something I object to. 
>>>> >8^D You're doing it on purpose. I'm sure of it ... like pulling the wings 
>>>> off flies and cackling like a madman.
>>>> 
>>>> No, the maintenance protocol must be *part of* the meat-like intelligence. 
>>>> That's why I mention things like suicide or starving yourself because your 
>>>> wife stops feeding you. To me, a forever-autopoietic system seems like a 
>>>> perpetual motion machine ... there's something being taken for granted by 
>>>> the conception ... some unlimited free energy or somesuch.
>>>> 
> 
> --
> Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙
> 
> .-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:
>  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


