On Thursday, September 19, 2024, at 7:51 AM, Matt Mahoney wrote:
> How do you think microtubules affect the neural network models that have been 
> used so effectively in LLMs and vision models? Are neurons doing more than 
> just a clamped sum of products and adjusting the weights and thresholds to 
> reduce output errors?

Artificial neurons are simplified models of real neurons, though the models keep 
getting mathematically enhanced. MTs, by contrast, are molecularly granular, with 
electrochemical dynamics that are still not understood and suspected quantum 
optimizations involved. LLMs deal largely with fixed characters in static 
alphabets, whereas the human mind, IMO, operates with fuzzy characters in dynamic 
alphabets of higher symbol complexity outside natural language, so our reasoning 
still leaves much to be imitated and exceeded. Understanding MT electronics may 
lead to new artificial models.
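
For reference, here is a minimal sketch of the "clamped sum of products" model the 
question describes: a single artificial neuron whose output is a weighted sum 
squashed by a sigmoid, with the weights and bias adjusted by gradient descent to 
reduce squared output error. It is only an illustration of that simplified model, 
not of anything MTs might be doing; all names and parameters are made up for the 
example.

import math
import random

def sigmoid(x):
    # "Clamp" the weighted sum into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(weights, bias, inputs):
    # Sum of products of weights and inputs, plus a bias (threshold) term
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def train_step(weights, bias, inputs, target, lr=0.5):
    # Adjust weights and bias to reduce the squared output error
    y = neuron_output(weights, bias, inputs)
    grad = (y - target) * y * (1.0 - y)   # d(0.5*(y-target)^2)/d(pre-activation)
    weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    bias = bias - lr * grad
    return weights, bias

# Usage: one neuron learning the (linearly separable) AND function
random.seed(0)
weights, bias = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(5000):
    for inputs, target in data:
        weights, bias = train_step(weights, bias, inputs, target)
print([round(neuron_output(weights, bias, x), 2) for x, _ in data])

Everything above is a handful of arithmetic operations per neuron; the point is 
how little of a real neuron's molecular machinery it tries to capture.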

For example: view evolution and civilization as complex, dense, and dynamic 
electronic schematics. Many of their circuits and regions remain unknown, partly 
because our understanding of MTs is so limited.