The best compressors are very complex. They use hundreds or thousands of
independent context models, adaptively combine their bit predictions, and
encode the prediction error. The decompressor uses an exact copy of the
model, trained on previously decoded output, to reconstruct the original
data. Most of the computing time goes into modeling, so decompression takes
as long as compression. In fact it takes a little longer, because decoding
cannot be overlapped with prediction the way encoding can.
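To make the symmetry concrete, here is a minimal sketch (my own toy code, not
any real compressor's) of the idea that the encoder transmits prediction
errors while the decoder runs an identical model trained on previously
decoded bits. The names BitModel, encode, and decode are mine, and a real
context-mixing compressor would arithmetic-code the predicted probabilities
rather than emit raw error bits:

```python
# Toy illustration: encoder and decoder keep identical adaptive models,
# each trained only on data the other side can also see.

class BitModel:
    """Order-1 bit predictor: estimates P(next bit = 1) given the previous bit."""
    def __init__(self):
        self.counts = {0: [1, 1], 1: [1, 1]}  # Laplace-smoothed [zeros, ones]
        self.prev = 0

    def p1(self):
        zeros, ones = self.counts[self.prev]
        return ones / (zeros + ones)

    def update(self, bit):
        self.counts[self.prev][bit] += 1
        self.prev = bit

def encode(bits):
    """Emit each bit XORed with the model's most likely prediction."""
    model = BitModel()
    errors = []
    for b in bits:
        pred = 1 if model.p1() > 0.5 else 0
        errors.append(b ^ pred)   # prediction error (0 = model was right)
        model.update(b)           # train on the actual bit
    return errors

def decode(errors):
    """Identical model, trained on previously decoded bits, inverts the XOR."""
    model = BitModel()
    bits = []
    for e in errors:
        pred = 1 if model.p1() > 0.5 else 0
        b = e ^ pred
        bits.append(b)
        model.update(b)           # same update as the encoder made
    return bits

data = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
assert decode(encode(data)) == data
```

Because both sides run the same model on the same history, the decoder's
predictions match the encoder's exactly. Note the sequential dependency in
decode: each prediction needs the previous decoded bit, which is why
decoding cannot be overlapped with prediction the way encoding can.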

On Mon, Nov 18, 2019, 2:23 PM <[email protected]> wrote:

> If you were matching the text in groups, it would be quicker than matching
> it at the letter level, but yes, that's only if it's made with speed in mind.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M7f50555314a96387902cf206
Delivery options: https://agi.topicbox.com/groups/agi/subscription
