On Thursday, May 28, 2020, at 11:38 AM, Alan Grimes wrote:
> Let's think about this in terms of a problem stack
Ok

On Thursday, May 28, 2020, at 11:38 AM, Alan Grimes wrote:
> We place at the bottom of the problem stack:
>
> "Well I decided to deviate from my daily routine because Reasons". (We
> use Reasons to cut off a much lengthier discussion of causality...)
>
> The next line is "I started doing X but ran into Problems so I started
> working on ...."
Um, this is just an example, I'm guessing? OK, I get it.

On Thursday, May 28, 2020, at 11:38 AM, Alan Grimes wrote:
> So what is your problem stack that got you to working on language
> compressors and why is it the best path to AGI?
So you want some history of the direction I've taken? Ah, I see. Well, I've 
wanted immortality etc. for years, and I'm sure AGI is the next big step in 
evolution, so I'll skip to AGI since we're all interested in that anyway. But 
first: the only other path to immortality, in my eyes, is cryonics, and I have 
ideas for improving it, but not the equipment. We already do cryonics, and it 
already preserves your overall form. That goes to show it's a valuable path 
that is "easy". Again, I don't have the lab, else I'd try lots of things. One 
test is comparing different-sized nitrogen containers: if the container is 
bigger, the heat leaving the body shouldn't bounce back into it, so it should 
freeze faster; we want the heat to leave and not come back. We also need an 
accurate way to evaluate how good a freeze is. I could spend time on this if 
others helped try the novel tests I could come up with.

So, AGI. I've been working on it 24/7 for 5 years, never a break. I jump out 
of bed and get right to the computer after eating. At some point you realize 
that all AI networks do is learn patterns in data. They learn features/ 
representations. They learn a "model"/hierarchy, where features build larger, 
longer features. I already told you, Alan, it's not a "compressor" I'm working 
on; compression is the evaluation used to test the network/predictor. I could 
use perplexity, or the Turing Test, instead. What it is, is a network, a web/ 
hierarchy. My algorithm / the theory of AI is understanding new data using 
previous experiences: it can predict the next word even when shown a novel 
question. And it is not "language" as you call it either; all data, such as 
vision or music, works the same way right down to the technical bits and guts. 
You can predict what comes next, or do translation.
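
To make "predict the next word from previous experiences" concrete, here is a 
minimal toy sketch, my own illustration and not the algorithm described above: 
a bigram counter that predicts the most frequent continuation it has seen. All 
names and the training sentence are made up.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen after `word`."""
    options = follows.get(word.lower())
    if not options:
        return None   # never seen this word before
    return options.most_common(1)[0][0]

model = train_bigrams("cats eat fish and dogs eat meat and cats eat fish")
print(predict_next(model, "eat"))   # 'fish' (seen twice, vs 'meat' once)
```

The same counting idea applies to any sequence data (notes in music, pixels in 
a scanline), which is the point made above about vision and music working the 
same way.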

So what I'm working on is not compression, nor language, but a data network. 
Note that the network compressing itself is a totally different kind of 
compression: it helps the network's model learn patterns, things it doesn't 
have to store again. The evaluation-compression, on the other hand, tests the 
predictor's ability to know what comes next in a feature/sentence.
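
Here is a small sketch of why compression works as an evaluation (a simplified 
unigram version; real evaluations like perplexity condition on context, and 
the numbers below are invented for illustration). Under arithmetic coding, a 
word the predictor assigns probability p costs about -log2(p) bits, so a 
predictor that matches the data's statistics yields a smaller compressed size.

```python
import math

def bits_to_encode(predictor_probs, text_words):
    """Arithmetic-coding cost: a word with predicted probability p
    costs -log2(p) bits. Better prediction => smaller compressed size."""
    total = 0.0
    for w in text_words:
        p = predictor_probs.get(w, 1e-6)   # tiny floor for unseen words
        total += -math.log2(p)
    return total

words = ["the", "the", "cat"]
good = {"the": 2/3, "cat": 1/3}   # matches the actual frequencies
flat = {"the": 0.5, "cat": 0.5}   # ignores them
print(bits_to_encode(good, words))  # about 2.75 bits
print(bits_to_encode(flat, words))  # 3.0 bits
```

The better predictor needs fewer bits, which is exactly what a compression 
benchmark measures.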

So your question is really, absolutely, and only: "So what is your problem 
stack that got you working on neural networks, and why is it the best path to 
AGI?"

My work explains and builds on others'. I not only explain exactly what the 
algorithm underlying GPT-2, PPLM, and Blender is doing, but also how to bring 
us much closer to AGI.

THE WRONG WAY: I started off very, very naively (but with good intentions!!). 
I was drawing Roomba-style robots that would learn to crawl by randomly trying 
movements and tweaking the actions that worked: a recursive update to "how to 
walk faster". I stopped when I realized AGI needs a lot of "considerations of 
things, IoT style": lots of context, big data. If it wants to walk faster, it 
may need to return home instead of crossing the finish line, build a rocket, 
and figure out how to steal a warehouse truck. I needed a thinker, not a 
walker robot body. The brain comes first, not the body; the brain is where 
the, erm, "magic" happens. Brain is brain, not motors. You'll see where the 
body's output comes into play though, hold on. Output is a senseless thing 
without feedback; it's a loop that updates your domain attention.

THE MODEL BASE: I was learning a lot about how the model's prediction/ 
translation works. It can robustly recognize sentences or images or music 
despite similar objects/words being present in different positions. There's 
more to this, as I showed in my videos: it allows the model to understand 
cause and effect, and to recognize unseen features as well. A model 
understands what can happen, or what is what, and by how much (%).

MODEL DRIVER / DESIRES / STRONG ATTENTION DOMAINS FORCED: I learnt later how 
to force the model to talk/ask about certain things, like a GPT-2 that utters 
all day "I'm immortal by?", "I can become immortal if we?", "I will survive 
by?"
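
One crude way to force a model toward certain topics (a toy stand-in for 
PPLM-style steering, not the method described above; the scores and vocabulary 
below are invented) is to add a bonus to the logits of goal-domain tokens 
before sampling:

```python
import math

def steer(logits, goal_words, boost=2.0):
    """Add a bonus to goal-domain tokens, then softmax-normalize.
    A crude sketch of steering a generative model toward a topic."""
    biased = {w: s + (boost if w in goal_words else 0.0)
              for w, s in logits.items()}
    z = sum(math.exp(s) for s in biased.values())
    return {w: math.exp(s) / z for w, s in biased.items()}

# Hypothetical next-token scores from some base model:
logits = {"weather": 1.2, "immortal": 0.3, "nanobots": 0.1, "lunch": 1.0}
probs = steer(logits, goal_words={"immortal", "nanobots"})
print(max(probs, key=probs.get))   # a goal word now outranks "weather"
```

With the boost applied, the model's chatter drifts toward the permanent 
agenda without retraining the base predictor.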

AN EVOLVING GOAL, A NEW DATA SEEKER: Although I knew this, I didn't see it as 
clearly as I do now. AGI needs to take in our current data, pay attention to 
certain areas, and generate (or collect) desired data from specific sources, 
like certain websites, people, or questions to itself. It updates where to do 
this, too; like a robot learning how to walk faster, it modifies its goals, 
evolving toward a more narrow, specialized domain, e.g. survival > food > 
money > job > boss > smiles > plastic surgery > plastic tool production, and 
starts generating data from those domains. It must nest back and answer the 
parent question once it gathers data from similar domains; I mean, it's 
gathering the data it needs to answer the real question in the end. Finding a 
path is important. To become immortal I need AI, AI needs nanobots, nanobots 
need energy; the goals/domains are similar and update. And AGI must recognize 
desired-outcome milestones along the way, or hold onto partially supported 
data by collecting loads more partially likely/true data.
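
The "nest back and answer the parent question" behaviour can be sketched as a 
goal stack (my own illustration, using the immortality > AI > nanobots > 
energy chain from the text as a made-up example):

```python
def pursue(goal, subgoals_of):
    """Depth-first goal refinement: push sub-goals onto a stack, and
    once they are solved, 'nest back' and answer the parent question."""
    stack, solved = [goal], []
    while stack:
        g = stack[-1]
        pending = [s for s in subgoals_of.get(g, []) if s not in solved]
        if pending:
            stack.append(pending[0])    # narrow to a sub-domain first
        else:
            solved.append(stack.pop())  # all sub-goals met: answer g
    return solved

chain = {"immortality": ["AI"], "AI": ["nanobots"], "nanobots": ["energy"]}
print(pursue("immortality", chain))
# ['energy', 'nanobots', 'AI', 'immortality']
```

The narrowest domain gets answered first, and each answer unlocks the goal 
one level up, back to the real question at the bottom of the stack.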

My work is simple and unifies a lot. I know a lot of things, from nanobots to 
the future to activation functions. I've made that book and multiple movies, 
and I build large note files of all my thoughts/discoveries.

AGI needs a lot of data and a lot of attention when making a good decision. My 
design has 3 attention types: (1) permanent (reward agenda steering), which 
can leak to related domain features to force it to talk about sub-goals on its 
own; (2) temporary energy activation (recently heard words are likely to 
appear again, and similar words are boosted as well; this needs little data to 
work); and (3) long-term memory storage of what usually follows some context 
during prediction, translation, or byte pair encoding segmentation. My design 
creates long-term memories through frequent accesses and wires together 
features heard close together in time. That's natural learning. My design also 
explains that we prune unused nodes to forget them, and explains pooling 
during the mixing of predictions in the net, among other things.
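
A toy sketch of two of those mechanisms, my own illustration rather than the 
design itself (the permanent reward attention is omitted): temporary 
activation that decays each step, and long-term links strengthened when 
features fire close together in time.

```python
class ToyAttention:
    """Temporary recency energy + Hebbian-style long-term wiring."""
    def __init__(self, decay=0.5):
        self.energy = {}   # temporary, recency-based activation
        self.links = {}    # long-term co-activation strengths
        self.decay = decay

    def hear(self, word):
        # Strengthen links to still-active words ("wires together
        # features heard in close time"), then boost the new word.
        for other, e in self.energy.items():
            if other != word and e > 0.1:
                pair = tuple(sorted((word, other)))
                self.links[pair] = self.links.get(pair, 0.0) + e
        self.energy[word] = self.energy.get(word, 0.0) + 1.0

    def step(self):
        # Temporary energy fades; long-term links persist.
        self.energy = {w: e * self.decay for w, e in self.energy.items()}

net = ToyAttention()
for w in ["cats", "eat", "fish"]:
    net.hear(w)
    net.step()
print(net.links)   # adjacent words end up more strongly wired
```

Words heard back to back get strong links; words further apart in time get 
weaker ones, because the bridging energy has already decayed.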

AGI needs an oddly abstract reasoning and fetch ability. It must do logical 
AND, OR, NOR; fetch or await some letter in a book; track time with checks 
every hour; ask people questions instead of just thinking about doing so 
(trigger real motor speech); hold onto thoughts if it decides it must; carry 
over numbers mentally; and recognize the correct answer in the following by 
matching or entailing sentences: Those witches who were spotted on the house 
left in a hurry to see the monk in the cave near the canyon, and there was the 
pot of gold they left, and when they returned they knew where to go if they 
wanted it back. They knew the keeper now owned it, and if they waited too long 
then he would forever own it. Who owns what?
Answers: witches own monk / witches own canyon / monk owns gold / monk owns 
house / monk owns cave / cave owns pot / there was pot / he owns it

AGI can translate, summarize, extend, and segment data features. Extend and 
summarize are the same thing: you elaborate by saying more of the less 
important filler words, or else say just the most desired, recent, frequent, 
and related words to summarize the most likely thing(s). Translation is 
recognition; it produces a similar sentence, sits stably between summarize and 
elaborate, and stays roughly the same length when spoken.
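
A toy version of summarizing by keeping the most frequent, related content 
(my own sketch, with an invented stop-word list and example text; real 
summarization also weighs recency and desire, as described above):

```python
from collections import Counter

def summarize(text, keep=1):
    """Keep the sentence(s) whose content words are most frequent
    overall -- dropping the filler, saying just the likely thing."""
    stop = {"the", "a", "is", "and", "it", "to", "of", "in"}
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w for s in sentences
                   for w in s.lower().split() if w not in stop)
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in s.lower().split()
                                      if w not in stop),
                    reverse=True)
    return ". ".join(scored[:keep]) + "."

text = ("Cats eat fish. Fish live in water. "
        "My neighbour painted his fence. Cats like fish a lot")
print(summarize(text))   # keeps the sentence about the dominant topic
```

Raising `keep` moves the output back toward elaboration, which matches the 
idea that summarize and extend are two ends of the same dial.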

The brain is a data seeker. It can predict well. It finds desired answers to 
questions without coding/implementing anything, all by using large amounts of 
data as evidence to find likely candidate leads ranked by probabilistic 
likelihood. Sometimes it does find a really good answer, and it does 
everything else as well. God, sometimes it finds a hole in the ground and 
can't even backtrack: a local, but not global, optimum.

I love generative models, because we need new, desired data on long-standing 
root questions like food, survival, sex, and shelter. We need AGI to run 
through desired paths in the net and know how to do so sensibly. It must 
babble correctly, and babble about the desired domains.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T224cb80d0cc8b0e7-M9be9615547a48609d9c0826d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
