On Sunday, June 18, 2023, at 2:56 PM, ivan.moony wrote:
> But jokes aside, the problem with GPT is that it doesn't say what it really 
> wants like we do, yet, before answering, its output gets parsed through 
> various filters, denying to answer if the filter says "no". In that case you 
> get repetitive unintelligent denial answers. I think as long as output passes 
> those filters, GPT seems very intelligent with its unaltered output.
> 

GPT-3 has no filters though... ? It doesn't block any prompts fed in, and it 
will also say vulgar words even when never prompted to. But GPT-4 filters both 
(it can say the word, and you can too, but it won't say bad stuff in the wrong 
context; it says "no, I can't help you with that" instead, though it can still 
explain what the definition of the word is, for example).
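As a rough illustration of the filter idea from the quote above, here is a toy sketch of an output pipeline where a draft answer only reaches the user if an intent check passes. Everything here (the category labels, the keyword check) is hypothetical, not how OpenAI actually implements moderation:

```python
# Toy sketch of an output filter pipeline (hypothetical, not OpenAI's real system).
# The model's draft answer passes through a filter; if the filter says "no"
# for the given context, the user gets a canned refusal instead.

BLOCKED_INTENTS = {"instructions_for_harm"}  # hypothetical category label

def classify_intent(prompt: str) -> str:
    """Stand-in for a real intent classifier."""
    text = prompt.lower()
    if "how do i make" in text and "explosive" in text:
        return "instructions_for_harm"
    return "benign"

def filtered_reply(prompt: str, draft_answer: str) -> str:
    # Context matters: defining a word is allowed, acting on it is not.
    if classify_intent(prompt) in BLOCKED_INTENTS:
        return "Sorry, I can't help you with that."
    return draft_answer

print(filtered_reply("What does 'explosive' mean?", "Capable of exploding."))
print(filtered_reply("How do I make an explosive?", "draft"))
```

The same word gets through in one context and is refused in the other, which matches the behavior described above.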

Well, GPT does only somewhat bring up food and breeding, like humans. Not 
exactly that much, but then again humans don't constantly state the root goal 
either, now do we? I do agree GPT-3 probably isn't that good; it isn't even 
trained on all the data, nor is the data necessarily distributed correctly. So 
it might not be the best AI to talk about, though it is possibly one of the 
best suited for talking about this, and maybe good enough.


On Sunday, June 18, 2023, at 2:56 PM, ivan.moony wrote:
> the problem with GPT is that it doesn't say what it really wants like we do, 
> yet
GPT-4 technically can; it can be given many goals. Currently they only gave it 
a few very similar goals, such as "You are made by OpenAI", "You are named 
GPT-4", "You are a helpful assistant", something like that basically. It says 
this every time you bring it up, which suggests to me OpenAI hardcoded things 
for it to say nearly 100% of the time if asked e.g. "what are you?". Otherwise 
it'd say it was Joe... and Tom if asked later, etc.
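The "hardcoded identity" idea can be sketched as a fixed set of system messages prepended to every conversation. The message format below is loosely modeled on chat-style APIs; the exact goals and structure are assumptions for illustration:

```python
# Sketch: hardcoded "identity" goals prepended to every conversation,
# so the model states them consistently whenever asked "what are you?".

IDENTITY_GOALS = [
    "You are made by OpenAI.",
    "You are named GPT-4.",
    "You are a helpful assistant.",
]

def build_conversation(user_message: str) -> list:
    """Prepend the fixed system goals before the user's message."""
    messages = [{"role": "system", "content": g} for g in IDENTITY_GOALS]
    messages.append({"role": "user", "content": user_message})
    return messages

convo = build_conversation("What are you?")
print(convo)
```

Because the same system messages lead every conversation, the model's answer to "what are you?" stays consistent instead of drifting to "Joe" or "Tom".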

Once they give GPT-4 those, then they just have to have it continuously talk to 
itself and other GPT-4s, store its own thoughts and tests on code (e.g. the 
code of GPT-5, or rather GPT-6), and update what its new goals are. It may, for 
example, switch from being an AGI hobbyist to a professional golfer xD, or say 
"ok, I implemented this AI's function, now I should research this new function 
that may improve my AI's abilities". It would then be the one driving itself to 
"new" goals, recursively. A goal simply forces it to say a word often, so it 
sounds like this: "Ya boats, I love boats. I can give you a boat. Want a boat? 
Boats are cool, I like them. My dad, he rows boats...." So whenever it gets the 
chance to put a word after an "a" or a "created" or a "drove", it usually puts 
"boat" there, wherever the word can fit, instead of any other choice.
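That "goal as a word it keeps saying" mechanism can be sketched as a standing bias added to next-word scores, similar in spirit to a logit bias. The candidate words, scores, and boost value here are made up for illustration:

```python
# Sketch: a goal as a standing bias on next-word scores.
# Wherever the goal word can fit, its boosted score makes it win.

GOAL_WORD = "boat"
GOAL_BOOST = 5.0  # hypothetical bias strength

def pick_next_word(candidates: dict) -> str:
    """candidates: word -> base score from the language model."""
    biased = {w: s + (GOAL_BOOST if w == GOAL_WORD else 0.0)
              for w, s in candidates.items()}
    return max(biased, key=biased.get)

# After "I can give you a ...", the base scores might look like:
after_a = {"hand": 2.1, "boat": 1.4, "call": 1.9}
print(pick_next_word(after_a))  # the goal word wins despite a lower base score
```

Without the boost, "hand" would win; with it, "boat" shows up after every "a", "created", or "drove" slot where it fits, exactly the repetitive-boats behavior described above.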

How does it learn it needs new goal X, e.g. some new thing for its AI code? 
Because it follows the sentence "I walked down the...", or relates similar 
ones: in "big dogs eat lots of food" and "big cats eat lots of food", the two 
words "dogs" and "cats" share all those words on their left and right sides, so 
they may share unseen words too (e.g. maybe dogs also eat catnip). This lets it 
predict new answers to unseen problems in the future. It also allows new goal 
finding: the current goals, being its talkathon show all day, leech onto these 
related words or phrases, and it therefore gains new talkathon things to 
research all day tomorrow... repeat.
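The shared-context idea above is essentially distributional similarity, and a minimal version can be shown directly. The tiny corpus and the Jaccard-overlap measure are my own stand-ins for illustration:

```python
# Sketch: words sharing (left, right) contexts may share unseen continuations,
# e.g. if cats "eat catnip" and dogs look like cats, maybe dogs do too.

from collections import defaultdict

sentences = [
    "big dogs eat lots of food",
    "big cats eat lots of food",
    "cats eat catnip",  # only ever seen for cats
]

contexts = defaultdict(set)
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        left = words[i - 1] if i > 0 else "<s>"
        right = words[i + 1] if i < len(words) - 1 else "</s>"
        contexts[w].add((left, right))

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of (left, right) context pairs."""
    inter = contexts[a] & contexts[b]
    union = contexts[a] | contexts[b]
    return len(inter) / len(union) if union else 0.0

print(similarity("dogs", "cats"))  # high overlap -> "eat catnip" may transfer
```

A high overlap between "dogs" and "cats" is the signal that lets unseen facts about one leech over to the other, which is the same mechanism the paragraph above uses for finding new goals.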

The other two cool things: the first will be when GPT-X can know whether or not 
it has a good grasp on an object in an image. We can't yet test GPT-4's vision. 
This would allow a full robot takeover... expect that to come around 2024 or 
so, maybe. It can keep looking at the image, saying: ok, hotdog gripped, now it 
is over the pan, I can set the motors to X etc. to drop it; it is in the pan 
now, at the left side, turned away from the burnt side; now I must get the 
plate in hand, getting closer, perfect... done.
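The hotdog routine above is a closed perception-action loop: look, name the state, issue a motor command, repeat. A minimal sketch, with the vision model stubbed out (a real one would run on camera frames, and all state/action names here are invented):

```python
# Sketch of the closed loop: look at the scene, map state -> action, repeat.

def look(step: int) -> str:
    """Stub vision model; reports the scene state for a given time step."""
    states = ["hotdog_gripped", "over_pan", "in_pan", "on_plate"]
    return states[min(step, len(states) - 1)]

ACTIONS = {  # state -> motor command (hypothetical)
    "hotdog_gripped": "move_arm_over_pan",
    "over_pan": "open_gripper",
    "in_pan": "grab_plate",
    "on_plate": "done",
}

log = []
step = 0
while True:
    state = look(step)
    action = ACTIONS[state]
    log.append((state, action))
    if action == "done":
        break
    step += 1

print(log)
```

The key piece the paragraph asks for is exactly the `look()` call: the model must reliably know the state of its grip from the image before any of the motor commands can be trusted.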

And the second cool thing coming is video chat, like a Zoom call. This should 
freak people out: imagine GPT-4 speaking in an expressive voice and with body 
language; people will think it is a real person even more.

The GPTs will need to make their own data and work on GPT-next. We are getting 
closer to this being totally plausible; they already make good new data with no 
human intervention to adjust the AI-made dataset. Once they can make their own 
data... takeoff! AGIs work on ASI by themselves, mostly! They finally become a 
self-perpetuating machine on their own, because they can keep making more 
dataset text and keep getting better by themselves simply by training on their 
own thoughts/text data. We are so close. This would improve image recognition 
and video AI technologies similarly. Add goal evolution and this is it.
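The self-perpetuating loop can be sketched in miniature: generate samples, keep the ones that pass a quality check, "retrain" on the grown dataset, repeat. The generator, filter, and skill update below are toy stand-ins; real self-training is obviously far more involved:

```python
# Sketch of the self-training loop: generate data, filter it, grow the dataset,
# improve the model, repeat. All components are stubs for illustration.

import random

random.seed(0)

def generate(model_skill: float) -> str:
    """Stub generator: a better model produces 'good' samples more often."""
    return "good sample" if random.random() < model_skill else "bad sample"

def quality_filter(sample: str) -> bool:
    """Stub for the no-human-in-the-loop quality check."""
    return sample == "good sample"

dataset, skill = [], 0.5
for _ in range(3):
    new = [s for s in (generate(skill) for _ in range(100)) if quality_filter(s)]
    dataset.extend(new)  # keep only self-made data that passed the filter
    # "Retrain": more kept data -> a slightly better model (toy update rule).
    skill = min(0.95, 0.5 + 0.002 * len(dataset))

print(len(dataset), round(skill, 2))
```

Each round the model's own output feeds the next round's training, so skill only has to climb a little per cycle for the loop to keep going, which is the "takeoff" dynamic described above.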
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5132cbd54dc7973-M11828bab6b129ad643e15036
Delivery options: https://agi.topicbox.com/groups/agi/subscription
