My HOW TO >
One person said you'd need full AGI to solve the above. Remarkably, I say you need only a small part of AGI to ace it. How odd is that, to get two very opposing views in the same week! I'm thinking about coding it in the coming months. I believe I can ace it in accuracy using only one example of what an A looks like.

All these distortions of the A are just location and brightness offsets: rotate an A, stretch it, blur it, flip it, brighten parts of it, upsize it, remove color, rotate parts of it, etc. A human, given one example picture of a never-before-seen object (e.g. an elephant-dragon-frog), will easily recognize it later despite many distortions.

All the mentioned distortions to the A (rotate, blur, etc.) are solved by a few tricks. To illustrate: imagine we see the A again, the same as before but much brighter. When each pixel is compared to our stored copy, it is off by a lot in absolute brightness, but the error is the same across all the pixels; they are ALL 5 shades brighter, so the match should not be penalized much. Turning the A into noise would not behave like that: each pixel's error would be different relative to the others. Do this for each layer and you have an efficient network. Location works the same way.

The simplest part of my to-be AGI is recognition; everyone's just doing AI wrong. Recognizing the A is simple per se, but the distortions make it match less well. Yet most (99%) of the A is still there, and its parts are relatively unchanged with respect to each other. That is the pattern in recognizing the A: similar pixel brightness, similar location, and similar relative error, e.g. "hey, I'm off by 2 shades and my bro is off by 4 shades, we're similar, bro!" If this works, it will start everything, I think! I'd finally tackle Vision! It'd be the 2nd algorithm I've coded.
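The "ALL 5 shades brighter" idea above can be sketched in a few lines: score a match not by the raw per-pixel error, but by how *consistent* the errors are across pixels. This is my minimal reading of the post, not the author's code; the function name and toy pixel values are made up for illustration.

```python
import statistics

def relative_error_score(stored, seen):
    """Spread (population stdev) of per-pixel errors.

    A uniform brightness offset shifts every pixel by the same amount,
    so the errors are all equal and the spread is ~0 (small penalty).
    Noise gives each pixel a different error, so the spread is large.
    """
    errors = [s - t for t, s in zip(stored, seen)]
    return statistics.pstdev(errors)

stored_a = [10, 200, 10, 200, 10, 200, 10, 200]   # toy "A" template pixels
brighter_a = [p + 5 for p in stored_a]            # ALL pixels 5 shades brighter
noisy = [p + d for p, d in zip(stored_a, [-30, 12, 5, -18, 40, -7, 22, -35])]

print(relative_error_score(stored_a, brighter_a))  # 0.0 -> barely penalized
print(relative_error_score(stored_a, noisy))       # large -> heavily penalized
```

The same comparison could be applied per layer or per part, so a region that is uniformly brightened (or uniformly shifted in location) still counts as a good match.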
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T759eb6f9d5c84273-Mdde7f19b2b2a5a17d583c7df
Delivery options: https://agi.topicbox.com/groups/agi/subscription
