I think Newton's method of root finding could provide another class of
compression systems. By varying the function itself, I think the range of
possible systems could be expanded in interesting ways. I don't know yet
whether that would be useful. What I am really looking for is something
that can be used to simplify complicated (and complex) ordering functions.
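For reference, here is a minimal sketch of the Newton iteration I have in mind. The function names and tolerances are my own illustration; how such an iteration would map onto a compression scheme is still conjecture.

```python
def newton_root(f, f_prime, x0, tol=1e-10, max_iter=100):
    """Newton's method: repeat x -= f(x)/f'(x) until f(x) is near zero."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / f_prime(x)
    return x

# Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton_root(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Varying f here is what I mean by "varying the function itself": each choice of f yields a different fixed-point iteration.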

On Tue, Jun 25, 2019 at 8:20 AM Jim Bromer <[email protected]> wrote:

> Brett,
> Steve has been talking about something similar. I understand the value of
> being able to add and subtract rates or ratios as a substitute for
> multiplication and division, but I am wondering whether this might be used
> to alleviate fundamental problems in comparing discrete states. I also
> might be able to use something like that in my idea of a mathematical
> index. When using log values to represent ratios, you lose information
> (such as the actual numbers of activations and inhibitions), so it is a
> major compression technique: it compresses both the data and the
> mathematical function that uses the data. So I might use various ratios
> (of probability, for example) to derive an evaluation of a 'conceptual
> index'. There are certain mathematical series which can be expressed as
> relatively simple functions, but those functions combine addition and
> multiplication, and the division between the two methods becomes an
> obstacle to employing them to resolve important computational problems.
> There are mathematical workarounds, but they become so complicated that,
> from an amateur's point of view, it does not look like they would be
> effective. I just had an interesting thought: you could use functions of
> varying ratios as a compression method. Or, since I envision my
> (conjectured) mathematical conceptual index as needing to use different
> 'recipes' of ratios between different kinds of conceptual evaluations, it
> might be very useful. Thanks for mentioning this idea.
> Jim Bromer
>
>
> On Mon, Jun 24, 2019 at 8:45 AM Brett N Martensen <[email protected]>
> wrote:
>
>> What you are discussing is neural coding mechanisms. As you are aware,
>> spiking approaches use spike timing and spiking rates as one idea. I have
>> another idea. A neuron fires when the number of excitatory synaptic
>> connections minus the number of inhibitory connections exceeds a
>> threshold. If the number of synaptic connections from a single source
>> neuron is the log of a value, then the neuron fires when a given ratio of
>> values is recognized. So the synaptic connections from just two source
>> neurons are sufficient for a target neuron to fire: one source uses
>> excitatory connections and the other uses inhibitory connections. This is
>> based on Log(A/B) = Log(A) - Log(B). It converts ratios into subtraction,
>> which is what you get when you sum the excitatory and inhibitory
>> synapses. I think one of the reasons few people use this idea is that
>> spikes are easily measured, but counting the number of synaptic
>> connections is practically impossible without microscopic observation.
>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
>> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
>> participants <https://agi.topicbox.com/groups/agi/members> + delivery
>> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
>> <https://agi.topicbox.com/groups/agi/T395236743964cb4b-M81b3979cf01d86a924588f90>
>>
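Brett's log-ratio coding could be sketched roughly like this. The scale factor, the rounding to integer synapse counts, and the function names are my own illustrative assumptions, not anything Brett specified.

```python
import math

def synapse_count(value, scale=10):
    """Number of synaptic connections encoding log(value), rounded to an
    integer count (synapses come in whole numbers)."""
    return round(scale * math.log(value))

def neuron_fires(a, b, ratio_threshold, scale=10):
    """Target neuron fires when excitation minus inhibition clears the
    threshold:  scale*log(a) - scale*log(b) >= scale*log(ratio_threshold),
    which approximates a/b >= ratio_threshold via log(a/b) = log(a) - log(b)."""
    excitation = synapse_count(a, scale)   # connections from source A (excitatory)
    inhibition = synapse_count(b, scale)   # connections from source B (inhibitory)
    return excitation - inhibition >= round(scale * math.log(ratio_threshold))
```

With scale=10, a ratio of 8/2 = 4 clears a threshold ratio of 3, while 4/2 = 2 does not; the rounding step is where the compression (and information loss) Jim mentions shows up.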

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T395236743964cb4b-M4a237c91ac005a642126d8c3
Delivery options: https://agi.topicbox.com/groups/agi/subscription