The goal I have in mind is operating on compressed data without having
to decompress it, so being able to access internal relations would be
important. Some compressed data may not contain explicit internal
relations, but even then it would be nice to be able to modify the
data without decompressing it. My assumption is that the data would
have some kind of internal relations, either implicit in the data or
produced by the compression method.
The parts of the model that I am thinking about may contain functions to:
Compress data.
Transform compressed data into another compressed form without decompressing it.
Append additional data onto the previous compression without decompressing it.
Modify the data previously compressed without decompressing it.
Decompress the data.
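The five operations above can be sketched as code. This is only a toy illustration using run-length encoding (RLE) as a stand-in for "compressed data"; all function names are invented here, not an existing library.

```python
# Toy sketch of the proposed operations, using run-length encoding.
# Compressed form: a list of (value, count) pairs.

def compress(data):
    """RLE-compress a sequence into (value, count) runs."""
    runs = []
    for x in data:
        if runs and runs[-1][0] == x:
            runs[-1] = (x, runs[-1][1] + 1)
        else:
            runs.append((x, 1))
    return runs

def append_compressed(runs, data):
    """Append new data onto an existing compression without
    decompressing what is already there."""
    for x in data:
        if runs and runs[-1][0] == x:
            runs[-1] = (x, runs[-1][1] + 1)
        else:
            runs.append((x, 1))
    return runs

def transform_compressed(runs, f):
    """Map a function over the data while it stays compressed:
    one call per run instead of one call per symbol."""
    return [(f(v), n) for v, n in runs]

def decompress(runs):
    return [v for v, n in runs for _ in range(n)]
```

For example, `compress(list("aaabb"))` yields `[('a', 3), ('b', 2)]`, and `transform_compressed` touches each run once no matter how long the run is, which is the sense in which it operates "on the compressed data".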

If this model is seen as a general model with the potential to do any
of these things, then it becomes a model for any kind of system that
represents data together with functions that act on that data. The
model may not itself be a viable application; it would be more like a
purely virtual model that needs further definition.
If a compression method makes multiple passes to compress or
decompress data, then it already contains an algorithm that
effectively transforms compressed data from one form to another
without decompressing it. I just read that LZ makes multiple passes to
compress additional data onto what it had previously compressed, using
the compression that was already in progress (so to speak).
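That appending behavior can be seen in a minimal LZ78-style encoder: because the phrase dictionary is carried forward, more input can be compressed onto an in-progress stream without decompressing anything already emitted. This is a hedged toy sketch, not any particular LZ implementation.

```python
# Minimal LZ78-style encoder. The dictionary persists between calls,
# so feed() can be called repeatedly to append data onto the existing
# compressed stream without touching earlier output.

class LZ78Encoder:
    def __init__(self):
        self.dictionary = {"": 0}   # phrase -> index
        self.pending = ""           # prefix currently being matched

    def feed(self, text):
        """Compress more text onto the in-progress stream; returns
        the newly emitted (prefix_index, next_char) codes."""
        codes = []
        for ch in text:
            candidate = self.pending + ch
            if candidate in self.dictionary:
                self.pending = candidate
            else:
                # emit (index of longest known prefix, new character)
                codes.append((self.dictionary[self.pending], ch))
                self.dictionary[candidate] = len(self.dictionary)
                self.pending = ""
        return codes
```

A second call to `feed` reuses phrases learned during the first call, so later data compresses against the compression "already in progress".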
The interrelations of the data in the 'mind' of an AGI system would
have to be in a compressed form. Not all of the potential
interrelations of such a system would be explicitly represented, even
among those interrelations that were explicitly indicated. I think
recognizing this might help clarify some problems encountered in
thinking about AGI systems. If some internal relations are explicit,
or stored separately, they may not make sense taken out of some
greater context.
I was thinking of logic when I started this thread. I am working on
compressions of logical formulas, and one of the design parameters is
that I need functions that operate on those compressions without
decompressing them.
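One hedged reading of "operating on compressed logical formulas" is storing a formula as a DAG of shared subformulas and evaluating it node by node, never expanding it back into a tree. The representation and names below are invented for illustration and are not the compression the thread author is actually building.

```python
# A formula compressed by sharing subformulas: a list of nodes
#   ("var", name) | ("not", i) | ("and", i, j) | ("or", i, j)
# where i, j index earlier nodes, so each shared subformula is
# stored (and evaluated) exactly once.

def evaluate(nodes, assignment):
    """Evaluate the compressed formula directly; each shared
    subformula is computed once, on the compressed form."""
    values = []
    for node in nodes:
        op = node[0]
        if op == "var":
            values.append(assignment[node[1]])
        elif op == "not":
            values.append(not values[node[1]])
        elif op == "and":
            values.append(values[node[1]] and values[node[2]])
        elif op == "or":
            values.append(values[node[1]] or values[node[2]])
    return values[-1]

# (p and q) or not (p and q): the subformula (p and q) is stored once
# at index 2 and referenced twice.
formula = [("var", "p"), ("var", "q"),
           ("and", 0, 1), ("not", 2), ("or", 2, 3)]
```

Since the formula is a tautology, `evaluate` returns True for every assignment, without ever expanding the shared node into two copies.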


Jim Bromer
On Sun, Oct 7, 2018 at 5:20 PM Jim Bromer <[email protected]> wrote:
>
> n-ary state computers that represent n-ary numbers constitute a
> lossless compression of any unary number (sticks or marks using the
> 'natural' representation of numbers) within the range of the extent of
> the numeration (which means I can't casually figure the range out in a
> few minutes.)
> Jim Bromer
>
> On Sat, Oct 6, 2018 at 9:24 PM Matt Mahoney via AGI
> <[email protected]> wrote:
> >
> > I understand the desire to understand what an AGI knows. But that makes you 
> > smarter than the AGI. I don't think you want that.
> >
> > A neural network learner compresses its training data lossily. It is lossy 
> > because the training data information content can exceed the neural 
> > network's memory capacity (as all learners should). Then it compresses the 
> > remainder effectively by storing it as prediction errors. Learning simply
> > means making whatever adjustments reduce the error.
> >
> > On Fri, Oct 5, 2018, 10:29 PM Ben Goertzel <[email protected]> wrote:
> >>
> >> Jim,
> >>
> >> If you look at how lossless compression works, e.g. lossless text
> >> compression, it is mostly based on predictive probability models ...
> >>
> >> If you have an opaque predictive model of a body of text, e.g. a deep
> >> NN, then it's hard to manipulate the internals of the model ...
> >>
> >> OTOH if you have a predictive model that is explicitly represented as
> >> (say) a probabilistic logic program, then it's easier to manipulate
> >> the internals of the model...
> >>
> >> So I think actually "operating on compressed versions of data" is
> >> roughly equivalent to "producing highly accurate probabilistic models
> >> that have transparent internal semantics"
> >>
> >> Which is important for AGI for a lot of reasons
> >>
> >> -- Ben
> >> On Sat, Oct 6, 2018 at 5:05 AM Jim Bromer via AGI <[email protected]> 
> >> wrote:
> >> >
> >> > A good goal for a next generation compression system is to allow
> >> > functional transformations to operate on some compressed data without
> >> > needing to decompress it first. (I forgot what this is called but
> >> > there is a Wikipedia entry on something similar in cryptography.)
> >> > This is how multiplication works by the way.
> >> >
> >> > If a 'dynamic compression' was performed in stages using 'components'
> >> > which had certain abstract attributes that could be used in
> >> > computations that were done in multiple passes, then it might be
> >> > possible to postpone a complete analysis or computation until the data
> >> > was presented in a more abstract format (relative to the given
> >> > problem). The goal is to find a way to make each pass effective but
> >> > seriously less complicated. The idea is that the data 'components'
> >> > (the data produced by a previous pass) might have certain abstract
> >> > properties that were general, and subsequent passes might then operate
> >> > on narrower classes. (This is how many algorithms work now that I
> >> > think about it, but they are not described and defined using the
> >> > concept of compression abstractions as a fundamental principle.)
> >> > Jim Bromer
> >> 
> >> 
> >> --
> >> Ben Goertzel, PhD
> >> http://goertzel.org
> >> 
> >> "The dewdrop world / Is the dewdrop world / And yet, and yet …" --
> >> Kobayashi Issa
> >

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-M887f8e21b224516d50acbf17
Delivery options: https://agi.topicbox.com/groups/agi/subscription
