On Mon, Dec 16, 2024 at 3:02 AM mm ee <csmicahelli...@gmail.com> wrote:
> I am trying to understand, given the post is very vague. 1) What is this
> supposed to do differently from what we have today, 2) have you tried it?
> What new problems did it solve? and 3) what resources would one need to
> run it themselves? I'm not looking for detailed architectural explanations,
> but I feel like I'm looking at a Rube Goldberg machine

1) OpenCog and OpenNARS are logic-based AI systems whose "knowledge base" is made up of logic rules. Symbolic logic rules operate by discrete mechanisms such as string matching, variable substitution, etc. What I do is make these operations differentiable, so they can be implemented as neural networks. The candidate logic rules would then be "floating around" in parameter space, and gradient descent will find the best set of rules (a rough sketch follows at the end of this reply).

2) I am planning to try it on the game of TicTacToe, at which I'm an expert 😄. The ultimate goal, of course, is to train it as an LLM.

3) I chose TicTacToe because I can run it on my (rather low-end, GPU-less) computer. For an LLM we may pick a relatively small dataset, just to get a proof of concept.

Thanks for your interest; we can talk more if you want to work on this too 🙂
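To make point 1 concrete, here is a minimal sketch of one way the idea could look, assuming PyTorch and assuming facts and rule heads are embedded as fixed-size vectors; the class name SoftRuleMatcher and all dimensions are hypothetical illustrations, not the actual design. Discrete matching is replaced by a softmax over dot-product similarities, so a rule bank held as learnable parameters (the rules "floating around" in parameter space) can be tuned by gradient descent:

import torch
import torch.nn as nn

# Differentiable analogue of discrete rule matching: a bank of candidate
# rules lives in parameter space as learnable vectors. Instead of exact
# string matching, a query fact is compared to every rule head by dot
# product, and a softmax turns the scores into a soft, differentiable
# rule selection, so gradient descent can search for the best rule set.
class SoftRuleMatcher(nn.Module):
    def __init__(self, num_rules, dim):
        super().__init__()
        self.rule_heads = nn.Parameter(torch.randn(num_rules, dim) * 0.1)
        self.rule_bodies = nn.Parameter(torch.randn(num_rules, dim) * 0.1)

    def forward(self, fact):
        scores = self.rule_heads @ fact         # similarity to each rule head
        weights = torch.softmax(scores, dim=0)  # soft "which rule fired"
        return weights @ self.rule_bodies       # soft conclusion (substitution)

# Toy training loop: fit the rule bank to random fact/conclusion pairs.
matcher = SoftRuleMatcher(num_rules=8, dim=16)
opt = torch.optim.Adam(matcher.parameters(), lr=1e-2)
facts = torch.randn(32, 16)    # stand-ins for embedded facts
targets = torch.randn(32, 16)  # stand-ins for desired conclusions
for step in range(200):
    pred = torch.stack([matcher(f) for f in facts])
    loss = ((pred - targets) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

The only point the toy loop is meant to show is that "finding rules" becomes an ordinary gradient-descent fit once matching and substitution are soft.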